a transfer of an unknown state is a primary object of quantum information science . since the phrase `` unknown state '' suggests that the physical system is possibly entangled with another system , the foundation of this object can be related to the change of quantum correlation through the transfer process . associated with the maintenance of the inseparability , a distinguished class of local operations is the so - called _ entanglement breaking _ ( eb ) channel that breaks any entanglement , i.e. , a local operation is eb if is a separable state for any state . it is well known that an operation is eb if and only if it can be written as a _ measure - and - prepare _ ( m&p ) scheme that assigns the output states based on the classical data obtained by the measurement of the input states . when a process is not an m&p scheme , there exists an entangled state that maintains its inseparability after the process , and the process can transmit non - classical correlations . hence , it is natural to say that the process is in the _ quantum domain _ ( qd ) if it is not an m&p scheme . this poses a clear distinction between quantum and classical processes , firmly based on the maintenance of quantum correlation . in quantum information theory , local operations and classical communication ( locc ) are regarded as free resources ; hence , a quantum channel is only useful if it can not be simulated by locc . the assurance of qd processes tells us that a given quantum channel is different from any locc channel . subsequently , the criterion for qd processes has been used as a quantum benchmark for the experimental success of core physical processes , such as the transmission or storage of quantum states . mathematically , the set of qd channels is connected with a set of inseparable states by the jamiolkowski isomorphism , and the concept of qd can be considered to be the inseparability of quantum channels . in principle , one can determine a given process by process tomography and check the necessary and sufficient condition for an eb channel . however , tomographic reconstruction is not always easy to perform . assuming a practical channel and a limited set of experimental parameters , several qd criteria have been proposed in association with quantum key distribution ( qkd ) . thereby , the problem is rather identified as a type of entanglement verification / detection , and the formulations are deeply related to entanglement witnesses . on the other hand , it might be more direct to demonstrate better - than - classical performance by introducing a certain figure of merit when one shows the success of an experiment . a familiar approach is to investigate the average fidelity of the process with respect to an ensemble of states . if one can find the upper bound of the average fidelity achieved by the m&p schemes , surpassing the bound is a sufficient condition for a qd process . the optimization problem of the average fidelity is also investigated in state estimation and optimal cloning . 
aside from the quantum inseparability , an assurance of genuine quantum devices could be that not only a set of orthogonal states but also a set of their superpositions is coherently transferred . as in the spirit of the two - state qkd scheme , the coherence can be demonstrated by testing with two non - orthogonal states , and it would be important to construct an experimentally simpler verification scheme of qd processes as well as a solid foundation on the primary object . based on the transmission of binary coherent states and quadrature measurements , a verification scheme is developed in . a general approach that concerns the average fidelity for two non - orthogonal states is found in ref . . in this paper , we construct a simple verification scheme of qd processes using two non - orthogonal states as a variant of . the setup is as follows : a pair of pure states with non - zero overlap is prepared and experiences a physical process . suppose that the process converts the input states as , and the projection probabilities of the output onto the pair of target states , say and , are measured . we show the condition on and that ensures that the process is in qd . we derive the criterion in sec . ii and consider applications to quantum - optical experiments in sec . iii . we make a conclusion in sec . iv . any physical process is described by a completely positive trace - preserving ( cptp ) map . we define the average fidelity of a process with respect to the transformation task from a set of input states to a set of _ target _ states with a prior distribution by . the process is simulated by the m&p schemes when we can write , where is a positive - operator valued measure ( povm ) and is a density operator . the classical boundary of the average fidelity for the task is defined by the optimization over the m&p schemes : , where denotes the operator norm . we can verify that the process is in qd if the measured exceeds . note that the optimization problem reduces to the problem of the minimum error discrimination ( med ) when . in this case we can see that for any cptp map , and the orthogonal - target task is not useful for making a qd verification scheme . an interesting point is that the quantum correlation gains the score once the problem moves away from the point of the med problem . with the non - orthogonality between the target states as a parameter , we can work in a unified framework that includes two widely investigated classes of problems : state estimation and the med problem . the relation between the two problems was discussed from a different aspect . we start the two - state case by denoting , and all the relations between the states are described by two parameters . the upper bound can be obtained by following the discussion given by fuchs and sasaki , where , however , the proof of the bound is somewhat complicated . here , we provide a different derivation of , and the proof is much simpler . by choosing the orthogonal basis of the target states , we can write where we defined and then we have and let us choose the orthogonal basis of the input states , and define the pauli operators by . then , we can write where we defined since , we can choose the optimal povm so that without loss of generality . 
then , we can describe rank-1 povm element as a real vector in the bloch sphere with a single parameter , the condition of the povm , , implies using eqs .( [ form1]-[to1 ] ) we can rewrite eq .( [ fc1 ] ) as where ^ 2 .\nonumber \\ \end{aligned}\ ] ] in order to find an upperbound of , let us consider a three - dimensional loop and its tangent plane who has two points of tangency with and .if we define another loop on the plane with , \nonumber \\\end{aligned}\ ] ] and then we can directly verify that the latter loop is always above the former one , that is , \ge 0 . \end{aligned}\ ] ] with this inequality and eqs .( [ pz ] , [ px ] , [ fc2 ] ) , we obtain the upperbound is achievable by the povm with two elements , and , which form spectral decomposition of .therefore , we obtain where we introduced the key parameter that represents the `` total non - orthogonalty '' of the state transformation this quantity measures the non - orthogonality of the input states with respect to the non - orthogonal axes . when the target states are orthogonal reduces to and corresponds to the success probability of med for the two - state ensemble .it is worth noting that in the two - dimensional case the extreme eb map is _ classical - quantum _ ( cq ) map , that is , the measurement is orthogonal projection ( see , th . 5 ( d ) of ) .our approach here is in a sense to find the extreme point of eb maps .hence , the same result will be obtained by restricting the optimization over cq maps .another approach for the optimization problem is found in a different context .the optimization of over cptp maps is considered in .now we proceed to make the criterion for qd processes given the observed probabilities , and , where is the output of the channel corresponds to the input . with the expression of the classical boundary fidelity of eq .( [ ec ] ) , the problem is the existence of that satisfies .let us consider and as functions of ( see fig . [legendre1]a ) .then we can see that is satisfied if the segment that connects and is above the tangent line of whose slope is .the condition is where is the legendre transform of defined by , noting that is convex . in this case, we can obtain where is the solution of the equation . from elementary calculationwe obtain a simple qd condition in terms of the direct arithmetic mean , the slope , and the overlaps and , those are in defined by eq .( [ b ] ) : typical behavior of the boundary with respect to and for various is shown in fig .[ legendre1]b . andthe classical boundary as a function of the prior probability .( b)the classical - quantum boundary for the fidelities and specified by criterion ( [ cr ] ) is shown for , and .[ legendre1],width=325 ] this criterion provides a relation between the change of `` purity '' and the change of non - orthogonality in order that the process maintains the inseparability ( the fidelities give a lowerbound of the purity and operator norm of the output states such as and ) .certainly , the criterion is satisfied if both of the input states preserves the purity , i.e. , .moreover , it is known that a qubit channel is eb channel if it transforms a pure two - qubit entangled states into a separable state .hence , if our criterion is satisfied , we can fine a set of pure two - qubit entangled states whose inseparability survives after the local process . 
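To make the use of the classical boundary concrete, a minimal numerical sketch is given below. It encodes only two well-established ingredients: the Helstrom bound for minimum-error discrimination of two pure states, which is the value the classical boundary reduces to when the target states are orthogonal, and a generic comparison of a measured average fidelity against a supplied classical boundary. The general boundary for non-orthogonal targets derived above is not reproduced here; the function names and numerical values are illustrative only.

import numpy as np

def med_success_probability(overlap, p=0.5):
    # Helstrom bound: maximum success probability of minimum-error
    # discrimination (med) of two pure states whose inner-product modulus
    # is `overlap`, with prior probabilities (p, 1 - p).  The classical
    # boundary discussed above reduces to this value when the target
    # states are orthogonal.
    return 0.5 * (1.0 + np.sqrt(1.0 - 4.0 * p * (1.0 - p) * overlap ** 2))

def exceeds_classical_boundary(f_plus, f_minus, f_c, p=0.5):
    # Generic check: the process is witnessed to be in the quantum domain
    # if the measured average fidelity surpasses a given classical
    # boundary f_c (to be computed from the criterion derived in the text).
    return p * f_plus + (1.0 - p) * f_minus > f_c

print(med_success_probability(0.6))   # 0.9 for overlap 0.6 and equal priors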
it might be valuable to consider the case where the input states are mixed states . suppose that the input mixed states are prepared by applying a cptp map to a pair of pure states . if the total process is in qd , it is clear that is in qd . then , we can use the criterion assuming the task . since the condition that there exists a cptp map satisfying is , we obtain the criterion in the case of mixed inputs by replacing with the uhlmann fidelity . in quantum optical experiments , one of the most accessible states is the optical coherent state , where is the displacement operator and is the vacuum state defined by . in many situations , the ideal lossy channel is useful for describing the transmission of the coherent state as a first approximation . the ideal lossy channel with transmission transforms the coherent state as . this evolution preserves the purity , and the ideal lossy channel is clearly in qd . a natural question is the maintenance of coherence in the presence of excess noise . for the case of gaussian - distributed input coherent states on the phase space , one can find qd criteria where the noise is measured in terms of the quadrature variance or the average fidelity . in order to apply our criterion for a lossy channel , one may use the binary coherent states and choose . we can take without loss of generality . the experimental data and are directly measured by photon detection after an appropriate phase - space displacement . the threshold photon detector discriminates the states containing one or more photons from the vacuum state , and the measurement statistics give the probability of the projection onto the vacuum state . hence , the photon detection after the displacement gives the projection probability onto the coherent state , so that \textrm{tr } \left [ \hat \rho \hat d(\alpha)|0 \rangle\langle 0| \hat d^\dagger ( \alpha ) \right ] = \langle \alpha | \hat \rho | \alpha \rangle . a feasible choice of the target states is a pair of squeezed vacuums connected by the rotation . we write the squeezing parameter , and then the covariance matrices of the targets are given by . if the output is a gaussian state , the fidelity to the target state is given by , where we use eq . ( [ fofg ] ) of appendix b and the inequality comes from the relation between the geometric and arithmetic means . here can be selected to obtain the upper bound so that , and the fidelities are estimated by . using eq . ( [ fofg ] ) again , we have { \gamma ' } ^2 = |\langle \psi'_+| \psi'_- \rangle|^2 = \frac{2}{\sqrt{2 + \frac{1}{2}\left [ ( x'+y')^2 - ( x'-y')^2 \cos ( 2\theta ) \right ] } } ( [ eq39 ] ) . ( table i caption : the lhs of criterion ( [ cr ] ) is estimated from the degrees of squeezing ( antisqueezing ) for the input states ( ) and for the output states ( ) in experiments . the last two columns are the minimized value of the rhs of criterion ( [ cr ] ) with respect to the rotation angle and the value of the angle that achieves the minimum , . the criterion ( lhs ) ( rhs ) is not satisfied . ) now we can directly evaluate both sides of criterion ( [ cr ] ) for the experiments that investigate the degree of squeezing before and after the process . for the experiment demonstrated by honda _ et al . _ ( method i ) , the degrees of squeezing and antisqueezing were ( -2 db ) , ( 6 db ) , ( -0.07 db ) , ( 0.49 db ) , and . 
with the help of eq . ( [ eq39 ] ) , the rhs of ineq . ( [ cr ] ) is a function of , , , and , and is minimized to when . the results of similar calculations for the experiments are summarized in table i . unfortunately , we have not found a reported experiment where the process is supposed to have enough coherence to satisfy our criterion . note that the output - to - target fidelity of eq . ( [ eq37 ] ) is for gaussian states . in realistic situations , it is not always reasonable to assume that the states are gaussian . in such a case , we can use the lower bounds estimated from the quadrature measurements given in appendix a . if we choose in eq . ( [ a1 ] ) , the projection probability is lower bounded by the observed quadrature noises as . hence , we can use instead of eq . ( [ abxy ] ) provided . we have considered the average fidelity of the transformation task between two pairs of non - orthogonal pure states for a given quantum channel and derived a qd criterion . the criterion takes a simple form with a few experimental parameters and provides a relation between the change of `` purity '' and the change of non - orthogonality required for the channel to maintain the inseparability . the criterion can be applied to the case of mixed input states by using the uhlmann fidelity between the mixed inputs . we gave a few examples of applications to quantum optical experiments . in particular , we showed how to apply our criterion to experiments on the storage or transmission of squeezed states . while the criterion provides a concrete foundation for the transfer of an unknown quantum state in relation to the non - orthogonality , it is likely that surpassing the classical boundary achievable by a classical m&p device requires higher fidelities and lower noise than present experiments have achieved . it will be valuable , both fundamentally and technologically , to establish quantum channels that attain such a high - standard benchmark . the author thanks m. koashi and n. imoto for helpful discussions . the author is supported by jsps research fellowships for young scientists . the fidelity to a coherent state can be given by photon detection after a displacement , as described in the main text . similarly , the fidelity to a squeezed state can be given by the probability of photon detection after a certain displacement and squeezing operation . while the former can be realized with standard linear - optics techniques , the latter requires a squeezing operation . in this appendix we provide a method for estimating the fidelity to a squeezed state with linear optics and homodyne detection . let us write the photon number operator and the squeezing operator with degree of squeezing , whose action on the quadrature operator is given by . we define a squeezed photon number operator by . using the spectral decomposition of , we can see that for any normalized state . the inequality comes from . hence , we have . this provides a lower bound on the fidelity to the squeezed vacuum state from the quadrature moments determined by homodyne measurements , and . by taking a proper displacement beforehand , we obtain an estimate of the fidelity to any pure quadrature - squeezed state . 
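As a rough illustration of how the bound of this appendix can be evaluated from homodyne data, the following sketch assumes the quadrature convention x^2 + p^2 = 2n + 1 (vacuum quadrature variance 1/2), the squeezing action S(r): x -> e^{-r} x, p -> e^{+r} p, and the operator inequality |0><0| >= 1 - n, so that the fidelity to the squeezed vacuum obeys F >= 1 - <n_s> with n_s the squeezed photon-number operator. The normalization and the sign convention of the squeezing parameter are assumptions here and should be matched to the experiment.

import numpy as np

def squeezed_vacuum_fidelity_lower_bound(var_x, var_p, mean_x, mean_p, r):
    # Lower bound on the fidelity to the squeezed vacuum S(r)|0>, assuming
    # the convention  x^2 + p^2 = 2 n + 1  (vacuum quadrature variance 1/2)
    # and  S(r): x -> e^{-r} x ,  p -> e^{+r} p .  Then the squeezed photon
    # number obeys  <n_s> = ( e^{2r} <x^2> + e^{-2r} <p^2> - 1 ) / 2 , and
    # the operator inequality  |0><0| >= 1 - n  gives  F >= 1 - <n_s>.
    mom_x2 = var_x + mean_x ** 2      # second moments from homodyne data
    mom_p2 = var_p + mean_p ** 2
    n_s = 0.5 * (np.exp(2.0 * r) * mom_x2 + np.exp(-2.0 * r) * mom_p2 - 1.0)
    return 1.0 - n_s

# example: x-quadrature 0.5 dB below vacuum noise, p-quadrature 1 dB above
var_x = 0.5 * 10.0 ** (-0.5 / 10.0)
var_p = 0.5 * 10.0 ** (1.0 / 10.0)
print(squeezed_vacuum_fidelity_lower_bound(var_x, var_p, 0.0, 0.0, r=0.05))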
as a function of , the rhs of eq . ( [ a1 ] ) is maximized when . this provides the choice of the target state in the last part of sec . iii . the covariance matrix for a density operator is defined by _{j , k } = 4\left\{ \frac{\textrm{tr}[\hat \rho ( \hat x_j \hat x_k + \hat x_k \hat x_j ) ] } { 2 } - \textrm{tr}(\hat \rho \hat x_j ) \textrm{tr}(\hat \rho \hat x_k ) \right\} with . the uhlmann fidelity between gaussian states and is given by eq . ( [ fofg ] ) .
k. honda , d. akamatsu , m. arikawa , y. yokoi , k. akiba , s. nagatsuka , t. tanimura , a. furusawa , and m. kozuma , phys . rev . lett . 100 , 093601 ( 2008 ) .
j. appel , e. figueroa , d. korystov , and a. i. lvovsky , phys . rev . lett . 100 , 093602 ( 2008 ) .
h. yonezawa , s. l. braunstein , and a. furusawa , phys . rev . lett . 99 , 110503 ( 2007 ) .
if a quantum channel or process can not be described by any measure - and - prepare scheme , we may say that the channel is in the _ quantum domain _ ( qd ) since it can transmit quantum correlations . the concept of qd clarifies the role of a quantum channel in quantum information theory based on the local - operation - and - classical - communication ( locc ) paradigm : a quantum channel is only useful if it can not be simulated by locc . we construct a simple scheme to verify that a given physical process or channel is in qd by using two non - orthogonal states . we also consider applications to experiments such as the transmission or storage of quantum optical coherent states , single - photon polarization states , and squeezed vacuum states .
in the biological and engineering sciences , the study of the life length of organisms , devices and materials is of major importance . a substantial part of such studies is devoted to modeling the lifetime data by a failure distribution . the weibull and ew distributions are the most commonly used distributions in reliability and life testing . these distributions have several desirable properties and nice physical interpretations . unfortunately , however , these distributions do not provide a reasonable parametric fit for some practical applications where the underlying hazard functions may be decreasing , unimodal or bathtub - shaped . recently , there has been great interest among statisticians and applied researchers in constructing flexible families of distributions to facilitate better modeling of data . the exponential - geometric ( eg ) , exponential - poisson ( ep ) , exponential - logarithmic ( el ) , exponential - power series ( eps ) , weibull - geometric ( wg ) , weibull - power series ( wps ) , exponentiated exponential - poisson ( eep ) , complementary exponential - geometric ( ceg ) , poisson - exponential ( pe ) , generalized exponential - power series ( geps ) , exponentiated weibull - geometric ( ewg ) and exponentiated weibull - poisson ( ewp ) distributions were introduced and studied by adamidis and loukas , kus , tahmasbi and rezaei , chahkandi and ganjali , barreto - souza et al . and morais and barreto - souza , barreto - souza and cribari - neto , louzada - neto et al . , cancho et al . , mahmoudi and jafari , mahmoudi and shiran and mahmoudi and sepahdar , respectively . in this paper , we propose a new four - parameter distribution , referred to as the ewl distribution , which contains as special sub - models the generalized exponential - logarithmic ( gel ) , complementary weibull - logarithmic ( cwl ) , complementary exponential - logarithmic ( cel ) , exponentiated rayleigh - logarithmic ( erl ) and rayleigh - logarithmic ( rl ) distributions , among others . the paper is organized as follows : in section 2 , a new lifetime distribution , called the exponentiated weibull - logarithmic ( ewl ) distribution , is obtained by compounding the exponentiated weibull and logarithmic distributions . various properties of the proposed distribution are discussed in section 3 . estimation of the parameters by maximum likelihood via an em algorithm and inference for large samples are presented in section 4 . in section 5 , we study some special sub - models of the ewl distribution . finally , in section 6 , experimental results of the proposed distribution , based on two real data sets , are illustrated . suppose that the random variable has the ew distribution , where its cdf and pdf are given by and , respectively , where , , and . given , let be independent and identically distributed random variables from the ew distribution . let be distributed according to the logarithmic distribution with pdf . let ; then the conditional cdf of is given by ^{n \alpha} , which is an ew distribution with parameters , , . the ewl distribution , defined by the marginal cdf of , is given by }{\log \big ( 1-\theta \big)} . the pdf of the ewl distribution , denoted by ewl , is given by . the graphs of the ewl probability density function are displayed in fig . 1 for selected parameter values . 
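As a concrete illustration of this compounding construction, a minimal sketch of the EWL cdf and pdf is given below. It assumes the parameterization used above, with the EW cdf G(y) = (1 - exp(-(beta*y)^gamma))^alpha, 0 < theta < 1, and the maximum lifetime kept over a logarithmic number of latent risks; the function names are illustrative only.

import numpy as np

# ewl(alpha, beta, gamma, theta): compounding of the exponentiated Weibull
# cdf G(y) = (1 - exp(-(beta*y)**gamma))**alpha with a logarithmic(theta)
# number of latent risks, keeping the maximum lifetime (0 < theta < 1).

def ewl_cdf(y, alpha, beta, gamma, theta):
    big_g = (1.0 - np.exp(-(beta * y) ** gamma)) ** alpha
    return np.log(1.0 - theta * big_g) / np.log(1.0 - theta)

def ewl_pdf(y, alpha, beta, gamma, theta):
    e = np.exp(-(beta * y) ** gamma)
    big_g = (1.0 - e) ** alpha
    d_g = alpha * (1.0 - e) ** (alpha - 1.0) * e * gamma * beta * (beta * y) ** (gamma - 1.0)
    return -theta * d_g / ((1.0 - theta * big_g) * np.log(1.0 - theta))

# sanity check: the pdf integrates to ~1 on a fine grid
y = np.linspace(0.01, 8.0, 2000)
print(np.sum(ewl_pdf(y, 2.0, 1.0, 1.5, 0.5)) * (y[1] - y[0]))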
( fig . 1 : the ewl probability density function for and different values of , , and . )
the probability density function of the ewl distribution , which is given in ( [ pdf ewl ] ) , tends to zero as . the ewl distribution leads to the ew distribution as . the following values of the parameters , , and are of particular interest : ( i ) , ewl reduces to gel , which was introduced and examined by mahmoudi and jafari ; ( ii ) , ewl reduces to cwl ; ( iii ) , ewl reduces to cel ; our approach here is complementary to that of tahmasbi and rezaei in the sense that they consider the distribution while we deal with ; ( iv ) and , ewl reduces to the weibull distribution ; ( v ) and , ewl reduces to the ge distribution . the quantile of the ewl distribution , which is used for data generation from the ewl , is given by ^{1/\gamma} . now we obtain the moment generating function ( mgf ) and the order moment of the ewl distribution , since some of the most important features and characteristics such as central tendency , dispersion , skewness and kurtosis can be studied through these quantities . for a random variable with the ewl distribution , the mgf is given by m_{y}(t)=\frac{\alpha\theta}{\log(1-\theta)}\sum^{\infty}_{n=1}\sum^{\infty}_{k=0}\sum^{\infty}_{j=0}(-1)^{j+1}\frac{t^{k } \theta^{n-1}}{\beta^{k}k ! } \gamma(1+\frac{k}{\gamma}){n\alpha-1\choose j}(j+1)^{-(1+\frac{k}{\gamma})} . ( [ mgf ewl ] ) eq . ( [ mgf ewl ] ) can be used to obtain the order moment of the ewl distribution . we have e(y^{k})=\frac{\alpha\theta\gamma(1+\frac{k}{\gamma})}{\beta^{k}\log(1-\theta)}\sum^{\infty}_{n=1}\sum^{\infty}_{j=0}(-1)^{j+1}{n\alpha-1 \choose j}\theta^{n-1 } ( j+1)^{-(1+\frac{k}{\gamma})} . the random variable with pdf given by ( [ pdf ewl ] ) has mean and variance given , respectively , by and var(y)=\frac{\alpha\theta\gamma ( 1+\frac{2}{\gamma } ) } { \beta^{2}\log(1-\theta ) } \sum^{\infty}_{n=1}\sum^{\infty}_{j=0 } ( - 1)^{j+1 } \theta^{n-1 } { n\alpha - 1 \choose j } ( j+1)^{-(1+\frac{2}{\gamma})}-e^2(y ) , where is given in eq . ( [ mean ewl ] ) . using ( [ cdf ewl ] ) and ( [ pdf ewl ] ) , the survival function ( also known as the reliability function ) and the hazard function ( also known as the failure rate function ) of the ewl distribution are given , respectively , by and . the limiting behavior of the hazard function of the ewl distribution in ( [ hazard ] ) is : ( i ) for , and ; ( ii ) for , and ; ( iii ) for , for each value and . the proof is a straightforward calculation and is omitted . the graphs of the hazard rate function of the ewl distribution for and various values of , and are displayed in fig . 2 . 
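The closed-form quantile quoted above gives a direct inverse-transform recipe for data generation. A minimal sketch, under the same assumed parameterization as the cdf/pdf sketch earlier (the parameter values are illustrative):

import numpy as np

def ewl_quantile(u, alpha, beta, gamma, theta):
    # inverse of the EWL cdf sketched earlier (same parameterization)
    inner = ((1.0 - (1.0 - theta) ** u) / theta) ** (1.0 / alpha)
    return (1.0 / beta) * (-np.log(1.0 - inner)) ** (1.0 / gamma)

def ewl_rvs(n, alpha, beta, gamma, theta, seed=None):
    # inverse-transform sampling: y = Q(u) with u ~ Uniform(0, 1)
    rng = np.random.default_rng(seed)
    return ewl_quantile(rng.uniform(size=n), alpha, beta, gamma, theta)

sample = ewl_rvs(10000, alpha=2.0, beta=1.0, gamma=1.5, theta=0.5, seed=1)
print(sample.mean(), sample.var())   # compare with the series expressions above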
( fig . 2 : the hazard rate function of the ewl distribution for and different values of , , and . )
given that a component survives up to time , the residual life is the period beyond until the time of failure and is defined by the conditional random variable . the mean residual life ( mrl ) function is an important function for characterizing lifetimes in survival analysis , actuarial science , economics , other social sciences and reliability . in reliability , it is well known that the mrl function and the ratio of two consecutive moments of the residual life determine the distribution uniquely ( gupta and gupta , ) . in what follows we use the equations and , where is the upper incomplete gamma function and is the lower incomplete gamma function . the order moment of the residual life of the ewl distribution , which is obtained via the general formula m_{r}(t)=\frac{1}{s(t)}\int_{t}^{\infty}(y - t)^{r}f(y)dy , where is the survival function , is given by m_{r}(t)=\frac{\alpha \theta }{s(t ) \log(1-\theta)}\sum_{i=0}^{r}\sum_{j=0}^{\infty} \sum_{k=0}^{\infty } (-1)^{r+k - i+1 } t^{r - i } \theta^{j } \beta^{-i } ( k+1)^{-(1+\frac{i}{\gamma} ) } { { r}\choose{i}}{{\alpha(j+1)-1}\choose{k}} \times \phi(1+\frac{i}{\gamma } ; ( k+1 ) ( \beta t)^{\gamma } ) , where ( the survival function of ) is given in ( [ survive ] ) . the mrl function of the ewl distribution is obtained by setting in eq . ( [ r res ] ) . the mrl function as well as the failure rate function is very important since each of them can be used to determine a unique corresponding lifetime distribution . lifetimes can exhibit imrl ( increasing mrl ) or dmrl ( decreasing mrl ) behavior . mrl functions that first decrease ( increase ) and then increase ( decrease ) are usually called bathtub ( upside - down bathtub ) shaped , bmrl ( umrl ) . the mrl function of the ewl distribution is given by m_{1}(t)=\frac{\alpha \theta}{s(t ) \beta\log(1-\theta)}\sum_{j=0}^{\infty}\sum_{k=0}^{\infty}(-1)^{k+1 } \theta^j { { \alpha(j+1)-1}\choose{k } } ( k+1)^{-(1+\frac{1}{\gamma})} \times \phi(1+\frac{1}{\gamma} ; ( k+1 ) ( \beta t)^{\gamma } ) - t . the variance of the residual life of the ewl distribution can be obtained easily using and . on the other hand , the reversed residual life can be defined as the conditional random variable which denotes the time elapsed since the failure of a component given that its life is less than or equal to t . this random variable may also be called the inactivity time ( or time since failure ) ; for more details one can see kundu and nanda ( 2010 ) and nanda et al . ( 2003 ) . 
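The series expression for the MRL above can be cross-checked numerically, since m_r(t) = (1/S(t)) * integral from t to infinity of (y - t)^r f(y) dy. A minimal sketch, reusing ewl_cdf and ewl_pdf from the earlier sketch (the parameter values are illustrative only):

import numpy as np
from scipy.integrate import quad

def ewl_sf(t, alpha, beta, gamma, theta):
    # survival function S(t) = 1 - F(t), reusing ewl_cdf from the earlier sketch
    return 1.0 - ewl_cdf(t, alpha, beta, gamma, theta)

def ewl_residual_life_moment(t, alpha, beta, gamma, theta, r=1):
    # r-th moment of the residual life, m_r(t) = (1/S(t)) * int_t^inf (y-t)^r f(y) dy ;
    # r = 1 gives the mean residual life m_1(t)
    integrand = lambda y: (y - t) ** r * ewl_pdf(y, alpha, beta, gamma, theta)
    value, _ = quad(integrand, t, np.inf)
    return value / ewl_sf(t, alpha, beta, gamma, theta)

print(ewl_residual_life_moment(1.0, alpha=2.0, beta=1.0, gamma=1.5, theta=0.5))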
using ( [ pdf ewl ] ) and( [ survive ] ) , the reversed failure ( or reversed hazard ) rate function of the ewl is given by } .\ ] ] the order moment of the reversed residual life can be obtained by the well known formula =\frac{1}{f(t)}\int_{0}^{t}(t - y)^{r}f(y)dy,\ ] ] hence , {ll } \mu_{r}(t)&=\frac{\alpha \theta }{f(t ) \log ( 1-\theta ) } \sum_{i=0}^{r}\sum_{j=0}^{\infty}\sum_{k=0}^{\infty } \frac{(-1)^{i+k+1 } \theta^{j } t^{r - i}}{\beta^{i}(k+1)^{1+\frac{i}{\gamma} } } { { r}\choose{i } } { { ( j+1)\alpha -1}\choose{k } } \gamma(1+\frac{i}{\gamma } ; ( k+1)(\beta t)^{\gamma } ) .\end{array}\ ] ] the mean and second moment of the reversed residual life of the ewl distribution can be obtained by setting in ( [ rev residual ] ) .also , using and we obtain the variance of the reversed residual life of the ewl distribution. the amount of scatter in a population can be measured by the totality of deviations from the mean and median .the mean deviation from the mean is a robust statistic , being more resilient to outliers in a data set than the standard deviation .the mean deviation from the median is a measure of statistical dispersion .it is a more robust estimator of scale than the sample variance or standard deviation . for a random variable with pdf ,cdf , mean and , the mean deviation about the mean and the mean deviation about the median are defined by and respectively , where the bonferroni and lorenz curves and gini index have many applications not only in economics to study income and poverty , but also in other fields like reliability , medicine and insurance .the most remarkable property of the bonferroni index is that it overweights income transfers among the poor , and the weights are higher the lower the transfers occur on the income distribution .hence , it is a good measure of inequality when changes in the living standards of the poor are concerned .there are many problems especially in labor economics that fall into this category . using a version of the assignment model ,we show that the bonferroni index can be formulated endogenously within a mechanism featuring efficient assignment of workers to firms .this formulation is useful in evaluating the interactions between the distribution of skills and earnings inequality with a special emphasis on the lower tail of the earnings distribution .moreover , it allows us to think about earnings inequality by separately analyzing the contribution of each economic parameter .entropy has been used in various situations in science and engineering .the entropy of a random variable is a measure of variation of the uncertainty .statistical entropy is a probabilistic measure of uncertainty or ignorance about the outcome of a random experiment , and is a measure of a reduction in that uncertainty .numerous entropy and information indices , among them the rnyi entropy , have been developed and used in various disciplines and contexts . for a random variable with the pdf , the rnyi entropy is defined by ,\ ] ] for and . using the power series expansion and change of variable , we have ^{r } ( \gamma\beta)^{r-1}\sum^{\infty}_{j=0}\sum^{\infty}_{k=0}(-1)^{k+r}{\alpha(r+j ) -r\choose{k}}\frac{\theta^{j}\gamma(r+j)}{j ! \gamma(r)}\frac{\gamma\left(\frac{r(\gamma-1)+1}{\gamma}\right)}{(k+r)^{\frac{r(\gamma-1)+1}{\gamma}}}. 
\ ] ] thus , according to definition ( [ entropy ] ) , the rnyi entropy of ewl distribution is given by {ll } i_{r}(r)&=\frac{1}{1-r}\log \big[\left(\frac{\alpha\theta}{\log(1-\theta)}\right)^{r } ( \gamma\beta)^{r-1}\sum^{\infty}_{j=0}\sum^{\infty}_{k=0}(-1)^{k+r}{\alpha(r+j ) -r\choose{k}}\frac{\theta^{j}\gamma(r+j)}{j ! \gamma(r)}\medskip\\ & ~~\times\frac{\gamma\left(\frac{r(\gamma-1)+1}{\gamma}\right)}{(k+r)^{\frac{r(\gamma-1)+1}{\gamma}}}\big ] .\end{array}\ ] ] the shannon entropy is defined by ] . under conditions that are fulfilled for parameters in the interior of the parameter space but not on the boundary ,the asymptotic distribution of is , where is the unit information matrix .this asymptotic behavior remains valid if is replaced by the average sample information matrix evaluated at , say .the estimated asymptotic multivariate normal distribution of can be used to construct approximate confidence intervals for the parameters and for the hazard rate and survival functions .an asymptotic confidence interval for each parameter is given by where is the ( _ r , r _ ) diagonal element of for and is the quantile of the standard normal distribution . the likelihood ratio ( lr ) statistic is useful for comparing the ewl distribution with some of its special sub - models .we consider the partition of the ewl distribution , where is a subset of parameters of interest and is a subset of the remaining parameters .the lr statistic for testing the null hypothesis versus the alternative hypothesis is given by , where and are the mles under the null and alternative hypotheses , respectively .the statistic is asymptotically ( as ) distributed as , where is the dimension of the subset of interest .the mles of the parameters , , and in previous section must be derived numerically .newton - raphson algorithm is one of the standard methods to determine the mles of the parameters . to employ the algorithm , second derivatives of the log - likelihoodare required for all iterations .the em algorithm is a very powerful tool in handling the incomplete data problem ( dempster et al . , ; mclachlan and krishnan , ) .+ let the complete - data be with observed values and the hypothetical random variable .the joint probability density function is such that the marginal density of is the likelihood of interest .then , we define a hypothetical complete - data distribution for each with a joint probability density function in the form where , and . + under the formulation , the e - step of an em cycle requires the expectation of where is the current estimate ( in the iteration ) of .+ the pdf of given , say is given by and its expected value is {ll } e[z|y = y]&= \big(1-\theta ( 1-e^{-(\beta y)^{\gamma}})^{\alpha} \big)^{-1}. \end{array}\ ] ] the em cycle is completed with the m - step by using the maximum likelihood estimation over , with the missing s replaced by their conditional expectations given above .+ the log - likelihood for the complete - data is the components of the score function are given by ,\medskip\\ \frac{\partial l^{*}_{n}}{\partial\gamma}&=\frac{n}{\gamma} +n \log\beta + \sum^{n}_{i=1}\log y_i - \sum^{n}_{i=1 } ( \beta y_i)^{\gamma } \log(\beta y_i)+ \sum^{n}_{i=1}(z_i \alpha -1 ) \frac{(\beta y_i)^{\gamma } \log(\beta y_i ) e^{- ( \betay_i)^{\gamma}}}{1-e^{-(\beta y_i)^{\gamma}}} , \medskip\\ \frac{\partial l^{*}_{n}}{\partial\theta}&=\frac{n}{1-\theta}-\frac{\sum^{n}_{i=1 } z_i}{\theta } . 
\end{array}\ ] ] from a nonlinear system of equations , we obtain the iterative procedure of the em algorithm as where , and are found numerically . hence , for , we have ^{-1}.\ ] ]the ewl distribution contains some sub - models for the special values of parameters , and .some of these distributions are discussed here in details .the cwl distribution is a special case of the ewl distribution for .the pdf , cdf and hazard rate function of the cwl distribution are given , respectively by and }.\ ] ] according to eq .( [ mean ewl ] ) the mean of the cwl distribution is given by one can obtain the weibull distribution from the cwl distribution by taking to be close to zero , i.e. , the gel distribution is a special case of ewl distribution , obtain by putting .this distribution is introduced and analyzed by mahmoudi and jafari .the pdf , cdf and hazard rate function of the gel distribution are given , respectively by }{\log(1-\theta)},\ ] ] and }.\ ] ] according to eq .( [ mean ewl ] ) , the mean of the gel distribution is given by the cel distribution is a special case of the ewl distribution for .our approach here is complementary to that of tahmasbi and rezaei in the sense that they consider the distribution while we deal with .the pdf , cdf and hazard rate function of the cel distribution are given , respectively by and }.\ ] ] according to eq .( [ mean ewl ] ) , the mean of cel distribution is given by show the superiority of the ewg distribution , we compare the results of fitting this distribution to some of theirs sub - models such as wg , ew , ge and weibull distributions , using two real data sets .the required numerical evaluations are implemented using the r softwares .the empirical scaled ttt transform ( aarset , ) and kaplan - meier curve can be used to identify the shape of the hazard function .the first data set is given by birnbaum and saunders ( 1969 ) on the fatigue life of 6061-t6 aluminum coupons cut parallel with the direction of rolling and oscillated at 18 cycles per second .the data set consists of 101 observations with maximum stress per cycle 31,000 psi . the ttt plot and kaplan - meier curve for two series data in fig .3 shows an increasing hazard rate function and , therefore , indicates that appropriateness of the ewg distribution to fit these data .table 1 lists the mles of the parameters , the values of k - s ( kolmogorov - smirnov ) statistic with its respective _p_-value , -2log(l ) , aic ( akaike information criterion ) , ad ( anderson - darling statistic ) and cm ( cramer - von mises statistic ) for the first data .these values show that the ewg distribution provide a better fit than the wg , ew , ge and weibull for fitting the first data .we apply the arderson - darling ( ad ) and cramer - von mises ( cm ) statistics , in order to verify which distribution fits better to this data . the ad and cm test statistics are described in details in chen and balakrishnan . in general , the smaller the values of ad and cm , the better the fit to the data . according to these statistics in table 1 , the ewg distributionfit the first data set better than the others ..mles(stds . ) , k - s statistics , _ p_-values , and aic for the strengths of 1.5 cm glass fibres .[ cols="<,<,^,^,^,^,^,^",options="header " , ] using the likelihood ratio ( lr ) test , we test the null hypothesis h0 : wg versus the alternative hypothesis h1 : ewg , or equivalently , h0 : versus h1 : . 
the value of the lr test statistic and the corresponding _ p_-value are 16.713 and 4.34e-05 , respectively . therefore , the null hypothesis ( wg model ) is rejected in favor of the alternative hypothesis ( ewg model ) for any significance level greater than 4.34e-05 . to test the null hypothesis h0 : ge versus the alternative hypothesis h1 : ewg , or equivalently , h0 : versus h1 : , the value of the lr test statistic is 14.149 ( _ p_-value = 0.0008 ) , which indicates that the null hypothesis ( ge model ) is rejected in favor of the alternative hypothesis ( ewg model ) for any significance level greater than 0.0008 . to test the null hypothesis h0 : weibull versus the alternative hypothesis h1 : ewg , or equivalently , h0 : versus h1 : , the value of the lr test statistic is 15.247 ( _ p_-value = 0.0005 ) , which indicates that the null hypothesis ( weibull model ) is rejected in favor of the alternative hypothesis ( ewg model ) for any reasonable significance level . we also test the null hypothesis h0 : ew versus the alternative hypothesis h1 : ewg , or equivalently , h0 : versus h1 : ; the value of the lr test statistic is 1.713 ( _ p_-value = 0.1905 ) , which indicates that the null hypothesis ( ew model ) is rejected in favor of the alternative hypothesis ( ewg model ) only for significance levels above 0.1905 ; for any significance level below 0.1905 , the null hypothesis is not rejected , but the values of ad and cm in table 2 show that the ewg distribution gives a better fit to the second data set than the ew distribution . plots of the estimated cdf and pdf functions of the ewg , wg , ew , ge and weibull models fitted to these two data sets , corresponding to tables 1 and 2 , are given in fig . these plots suggest that the ewg distribution is superior to the wg , ew , ge and weibull distributions in fitting these two data sets . we propose a new four - parameter distribution , referred to as the ewg distribution , which contains as special sub - models the generalized exponential - geometric ( geg ) , complementary weibull - geometric ( cwg ) , complementary exponential - geometric ( ceg ) , exponentiated rayleigh - geometric ( erg ) and rayleigh - geometric ( rg ) distributions . the hazard function of the ewg distribution can be decreasing , increasing , bathtub - shaped and unimodal . several properties of the ewg distribution such as quantiles and moments , the maximum likelihood estimation procedure via an em algorithm , rényi and shannon entropies , moments of order statistics , the residual life function and probability weighted moments are studied . finally , we fitted the ewg model to two real data sets to show the potential of the newly proposed distribution .
badar , m.g . and priest , a.m. ( 1982 ) , statistical aspects of fiber and bundle strength in hybrid composites , progress in science and engineering composites , hayashi , t. , kawata , k. and umekawa , s. ( eds . ) , iccm - iv , tokyo , 1129 - 1136 .
barlow , r.h . toland , t. freeman , a bayesian analysis of stress - rupture life of kevlar 49/epoxy spherical pressure vessels , in : proceedings of the canadian conference in applied statistics , marcel dekker , new york , 1984 .
a. choudhury , a simple derivation of moments of the exponentiated weibull distribution , metrika 62 ( 2005 ) 17 - 22 .
a.p . dempster , n.m . laird , d.b . rubin , maximum likelihood from incomplete data via the em algorithm ( with discussion ) , journal of the royal statistical society ser . b 39 ( 1977 ) 1 - 38 . 
j.a . greenwood , j.m . landwehr , n.c . matalas , j.r . wallis , probability weighted moments : definition and relation to parameters of several distributions expressible in inverse form , water resources research 15 ( 1979 ) 1049 - 1054 .
f. louzada - neto , m. roman , v.g . cancho , the complementary exponential geometric distribution : model , properties , and comparison with its counterpart , computational statistics and data analysis 55 ( 2011 ) 2516 - 2524 .
g.s . mudholkar , a.d . hutson , the exponentiated weibull family : some properties and a flood data application , communications in statistics - theory and methods 25 ( 1996 ) 3059 - 3083 .
a.k . nanda , h. singh , n. misra , p. paul , reliability properties of reversed residual lifetime , communications in statistics - theory and methods 32 ( 2003 ) 2031 - 2042 .
in this paper , we introduce a new four - parameter generalization of the exponentiated weibull ( ew ) distribution , called the exponentiated weibull - logarithmic ( ewl ) distribution , which is obtained by compounding the ew and logarithmic distributions . the new distribution arises in a latent complementary risks scenario , in which the lifetime associated with a particular risk is not observable ; rather , we observe only the maximum lifetime value among all risks . the distribution exhibits decreasing , increasing , unimodal and bathtub - shaped hazard rate functions , depending on its parameters , and contains several lifetime sub - models such as the generalized exponential - logarithmic ( gel ) , complementary weibull - logarithmic ( cwl ) , complementary exponential - logarithmic ( cel ) , exponentiated rayleigh - logarithmic ( erl ) and rayleigh - logarithmic ( rl ) distributions . we study various properties of the new distribution and provide numerical examples to show the flexibility and potential of the model . keywords : em algorithm , exponentiated weibull distribution , maximum likelihood estimation , logarithmic distribution , probability weighted moments , residual life function . msc : 60e05 , 62f10 , 62p99 .
( fd ) enables a node to receive and transmit information over the same frequency simultaneously .compared with half - duplex ( hd ) , fd can potentially enhance the system spectral efficiency due to its efficient bandwidth utilization .however , its performance is affected by the self interference caused by signal leakage in fd radios . the self interference can be suppressed by using digital - domain , analog - domain and propagation - domain methods .however , the residual interference still exists due to imperfect cancellation . recently, fd technique has been deployed into relay networks .the capacity trade off between fd and hd in a two hop af relay system is studied , where the source - relay and the self interference channels are modeled as non - fading channels .the two - hop fd decode - and - forward ( df ) relay system was analyzed in terms of the outage event , and the conditions that fd relay is better than hd in terms of outage probability were derived in .the work in analyzed the outage performance of an optimal relay selection scheme with dynamic fd / hd switching based on the global channel state information ( csi ) . in , the authors analyzed the multiple fd relay networks with joint antenna - relay selection and achieved an additional spatial diversity than the conventional relay selection scheme .though fd has the potential to achieve higher spectrum efficiency than hd , hd outperforms fd in the strong self interference region .the work in proposed the hybrid fd / hd switching and optimized the instantaneous and average spectral efficiency in a two - antenna infrastructure relay system .for the instantaneous performance , the optimization is studied in the case of static channels during one instantaneous snapshot within channel coherence time and the distribution of self interference is not considered . for the average performance ,the self interference channel is modelled as static .the outage probability and ergodic capacity for two - way fd af relay channels were investigated while the self interference channels are simplified as additive white gaussian noise channels in . in practical systems ,the residual self interference can be modeled as the rayleigh distribution due to multipath effect . in this case , the analysis becomes a non - trivial task . 
in this paper , we consider a fd relay system consisting of one source node , one af relay node and one destination node .different from existing works on fd relay with predefined rx and tx antennas , in our paper , the relay node is equipped with an adaptively configured shared antenna , which can be configured to operate in either transmission or reception mode .the shared antenna deployment can use the antenna resources more efficiently compared with separated antenna as only one antenna set is adopted for both transmission and reception simultaneously .one shared - antenna is more suitable to be deployed into small equipments , such as mobile phone , small sensor nodes , which is essentially different from separated antennas in terms of implementation .the relay can select between fd and hd modes to maximize the sum rate by configuring the relay node with a shared antenna based on the instantaneous channel conditions .we refer to this kind of relay as a x - duplex relay .first , the asymptotic cdf of the received signal at the destination of the x - duplex relay system is calculated , then , the asymptotic expressions of outage probability , average ser and average sum rate are derived and validated by monte - carlo simulations .we show that the x - duplex relay can achieve a better performance compared with pure fd and hd modes and can completely remove the error floor due to the residual self interference in fd systems . to further improve the system performance ,a x - duplex relay with adaptive power allocation ( xd - pa ) is investigated where the transmit power of the source and relay can be adjusted to minimize the overall ser subject to the total power constraint .the end - to - end sinr expression is calculated and a lower bound and a upper bound are provided .the diversity order of xd - pa is between one and two .the main contributions of this paper are listed as follows : \1 ) the x - duplex relay with a shared antenna is investigated in a single relaying network , which can increase the average sum rate .\2 ) taking the residual self interference into consideration , the cdf expression of end - to - end sinr of the x - duplex relay system is derived .\3 ) the asymptotic expressions of outage probability , average ser and average sum rate are derived based on the cdf expression and validated by simulations .\4 ) adaptive power allocation is introduced to further enhance the system performance of the x - duplex relay system . a lower bound and an upper bound of the outage probability of xd - pa are derived and the diversity order of xd - pa is analyzed .the remainder of this paper is organized as follows : in section , we introduce the system model and x - duplex relay . in section , the outage probability , the average ser and the average sum rate of the x - duplex relay system are derived and a lower bound and a upper bound of the end - to - end sinr of xd - pa are provided .simulation results are presented in section .we draw the conclusion in section .as shown in fig . 1 , we consider a system which consists of one source node ( s ) , one destination node ( d ) , and one af relay node ( r ) .we assume the direct link from s to d is strongly attenuated and information can only be forwarded through the relay node . in this network ,all nodes operate in the same frequency and each of them is equipped with one antenna .node r is equipped with one transmit ( tx ) and one receive ( rx ) rf chains which can receive and transmit signal over the same frequency simultaneously . 
in the x - duplex relay, node r can adaptively switch between the fd and hd modes according to the residual self interference between the two rf chains of the relay node and the instantaneous channel snrs between the source / destination node and relay node . in this paper , all the links are considered as block rayleigh fading channels .we assume the channels remain unchanged in one time slot and vary independently from one slot to another .the derivation of end - to - end sinr of fd and hd mode is similar to the discussions in the earlier works .0.5 0.5 in the fd mode , both rx and tx chains at node r are active at the same time . the signal received at node ris given as where denotes the channel between source and relay , is the residual self interference of relay r. and denote the transmit signal of the source and relay . and are the transmit powers of the source and relay node . is the zero - mean - value additive white gaussian noise with the power .af protocol is adopted at relay r and the forwarding signal at the relay r can be written as where denotes the power amplification factor satisfying = { \beta _ f}^2(|{h_1}{|^2}{p_s } + | { h_{ri}}{|^2}{p_r } + { \sigma ^2 } ) \le 1,\ ] ] where the received signal at the destination d is given by where denotes the channel between relay and destination , and is the zero - mean - value additive white gaussian noise with power . the end - to - end sinr of fd mode can be expressed as using ( [ betaf ] ) , the sinr can be further simplified as where ,, denote the respective channel snrs and . in the hd mode ,the relay r receives the signal from the source at the first half of a time slot , and it is given by at second half of a time slot , relay r transmits the received signal to the destination d with af protocol .the received signal at destination d is given by where is the amplification factor .under transmit power constraint at relay r , can be expressed as at destination d , the end - to - end sinr is thus given by the instantaneous snrs , are modeled as the exponential random variable with respective means and . in the x - duplex relay system ,the self interference at relay is mitigated with effective self interference cancellation techniques .the residual self interference at relay is assumed to follow the rayleigh distribution . at the relay r, the snr of residual self interference follows the exponential distribution with mean value .the residual self interference level is denoted as .as the source signal might behave as interference to the self interference cancellation in active self interference cancellation schemes , the value of might vary with .if only passive cancellation is applied , might be independent to . in this paper , is merely used to denote the ratio of the average power of residual self interference and received signal at relay , and is not assumed to be constant .the hd mode outperforms the fd mode in the severe self interference region . to optimize the system performance , we consider a x - duplex relay which can be reduced to either fd or hd with different rf chain configurations based on the instantaneous sinr .the csi of the self interference can be measured by sufficient training .the csi of and can be obtained through pilot - based channel estimation .we also assume that reliable feedback channels are deployed , therefore the csis can be transmitted to the decision node .the system s average sum rate under fd and hd modes can be expressed as where , denotes the sinr of the fd and hd modes , respectively . 
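A small per-realization sketch of the two duplex modes may help fix ideas before the mode-selection rule is formalized below. The exact end-to-end SINR expressions derived above are not reproduced; the sketch uses the standard AF forms — an effective first-hop SINR g1/(1 + g_ri) for FD with residual self interference, g1*g2/(g1 + g2 + 1) for the two-hop AF link, and a 1/2 pre-log factor for HD — which are modeling assumptions rather than transcriptions of the paper's equations.

import numpy as np

def sinr_fd(g1, g2, g_ri):
    # AF full-duplex end-to-end SINR with residual self-interference SNR g_ri
    # (standard two-hop AF form with an effective first hop g1/(1+g_ri);
    # the paper's exact expression is not reproduced here)
    s1 = g1 / (1.0 + g_ri)
    return s1 * g2 / (s1 + g2 + 1.0)

def sinr_hd(g1, g2):
    # AF half-duplex end-to-end SNR (two-hop, no self interference)
    return g1 * g2 / (g1 + g2 + 1.0)

def xduplex_rate(g1, g2, g_ri):
    # X-duplex: pick the duplex mode with the larger instantaneous sum rate;
    # the HD rate carries the 1/2 pre-log because it occupies two half slots
    r_fd = np.log2(1.0 + sinr_fd(g1, g2, g_ri))
    r_hd = 0.5 * np.log2(1.0 + sinr_hd(g1, g2))
    return (r_fd, 'FD') if r_fd >= r_hd else (r_hd, 'HD')

print(xduplex_rate(100.0, 80.0, 50.0))   # strong residual SI: HD is selected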
to maximize the instantaneous sum rate ,the instantaneous sinr of x - duplex relay can be given by in order to further optimise the system performance , we introduce the adaptive power allocation ( pa ) in the x - duplex relay to maximize the relay system s end - to - end sinr subject to the total transmit power constraint , .the optimal pa scheme for fd mode and hd mode based on the instantaneous csis is given by based on ( [ pafdhd ] ) , the respective end - to - end sinr of fd and hd modes with pa are derived as therefore , the instantaneous sinr of x - duplex relay with pa can be given by this section , we present the cdf of the x - duplex relay and analyze the performance of the x - duplex system , including the outage probability , ser and the average sum rate .the derived expressions of performance of x - duplex with one shared antenna are essentially equivalent to the conventional system with two separated antennas [ 21].s [ lemmafdcdf ] the asymptotic complementary cdf of is given by where , , , are the first and zero order bessel function of the second kind .the derivation is presented in appendix a. [ lemmahdcdf ] the complementary cdf of is given by where .the hd mode s end - to - end sinr is given in ( 12 ) , with the help of ( * ? ? ?* eq.(3.324.1 ) ) , ( [ hdcdf ] ) can be obtained .[ lemmahybridcdf ] the asymptotic probability of can be obtained as where , are expressed as where , .the derivation is presented in appendix b. the asymptotic cdf of x - duplex relay system s sinr can be derived as \nonumber\\ & & + \frac{{2\eta ( { x^2 } + x)}}{{{\lambda _ 2}{p_r}{{(1 + \eta x)}^2}}}\left[ { { k_0}({\beta _ 1}){e^ { - cx } } - { k_0}({\beta _ 3}){e^ { - { \beta _ 4 } } } } \right ] , \vspace{-2mm}\end{aligned}\ ] ] where , , , , , , , are the first and zero order bessel function of the second kind . according to the permutation theorem ,the cdf expression can be obtained as with the help of lemma [ lemmafdcdf ] , lemma [ lemmahdcdf ] , lemma [ lemmahybridcdf ] , ( [ cdf ] ) is derived .the outage probability can be given as where the threshold of the outage probability is set to ensure the transmit rate over bps / hz , and is cdf of the end - to - end sinr .the x - duplex relay configures the antenna to provide the maximum sum rate of the relay network . with the cdf expression in ( [ cdf ] ) and ( [ outage ] ), the outage probability of the x - duplex relay system can be derived . from lemma [ lemmafdcdf ] and( [ outage ] ) , the outage probability of the fd mode can be obtained . according to (* eq.(10.30 ) ) , in the high snr condition , when comes close to zero , the function converges to , and the value of is comparatively small .therefore , in the high snr scenarios , the fd mode s outage probability is approximately given by when the snr goes infinite , the outage probability of fd mode will approach therefore , the outage probability of fd mode is limited by the error floor which is caused by self interference at high snr . by substituting ( [ cdf ] ) into ( [ outage ] ), the outage probability of x - duplex relay system can be obtained . 
in the high snr, the outage probability can be derived using the similar approximation in ( [ fdoutage ] ) , when the snr goes infinite , the outage probability of x - duplex relay system approaches to zero , indicating that there is no performance floor for x - duplex relay system in the high snr region .for the x - duplex relay system , the finite diversity order of snr is provided by where is the system s outage probability at average snr .we use this equation to calculate the diversity order of x - duplex relay system .we assume the transmit power of the source and relay is the same under fixed power allocation condition , .the diversity order of the x - duplex relay system is given as {e^ { - { \beta _ 4}}}}}{{1 - \frac{1}{{1 + \eta x}}{e^ { - cx } } - \frac{{\eta x}}{{1 + \eta x}}{e^ { - { \beta _ 4}}}}}.\end{aligned}\ ] ] furthermore , the diversity order can be estimated by using the taylor s formula in ( * ? ? ?* eq.(1.211 ) ) in the high transmit power scenario }}{{{c_1}x + \eta x{c_1}({x^2 } + 2x ) + \eta x\frac{{x + 1}}{{\eta { \lambda _ 1}}}}},\ ] ] where .when the transmit power goes infinite , the diversity order of x - duplex relay system approaches to one , indicating that there is no error floor in the system . for the hd mode , from equation ( [ rate ] ) ,the hd mode s equivalent sinr in one time slot is given as .therefore , the outage probability of hd mode can be obtained with ( [ hdcdf ] ) the finite - snr diversity orders of fd and hd mode can be written as at medium snr and low residual self interference , the diversity order of fd can be approximated as . with optimal self interference cancellation , approaches one in high snr region .when the snr goes infinite , the diversity order of the fd and hd mode approaches to zero and one respectively , indicating that the outage probability curve of x - duplex relay system is parallel with hd mode when snr reaches this region .the outage probability intersection of fd and hd mode can be calculated as when , the outage probability of fd is lower than hd .the intersection point is affected by self interference level .when reaches zero , the intersection point goes infinite , indicating that fd outperforms hd in all snr circumstances with ideal self interference cancellation . for linear modulation formats ,the average ser can be computed as = \frac{{{a_1}\sqrt { { a_2 } } } } { { 2\sqrt \pi } } \int\limits_0^\infty { \frac{{{e^ { - { a_2}\gamma } } } } { { \sqrt \gamma } } { f_\gamma } ( \gamma ) d\gamma } , \ ] ] where is the cdf of , and is the gaussian q - function .the parameters denote the modulation formats , e.g. , for the binary phase - shift keying ( bpsk ) modulation ( * ? ? ?* eq.(6.6 ) ) .[ serpro ] the asymptotic average ser of the x - duplex relay system can be derived as where , , is the gamma function , is the incomplete gamma function , is the parabolic cylinder function .the derivation is presented in appendix c. according to ( [ cdf ] ) , when snr goes infinite , the cdf of becomes , the ser of x - duplex relay system comes to zero .for the fd mode and hd mode , the average ser can be given as for the fd mode , when snr goes infinite , the cdf of fd mode approaches . with ( [ ser ] ) and ( * ? ? ?* eq.(3.383.10 ) ) , the ser of fd mode can be obtained . 
from ( [ serlowerbound ] ) , it can be seen that the ser of fd mode is restricted by the lower bound , determined by self interference level , , .compared with fd mode , the x - duplex relay system removes the error floor and achieves lower ser in high snr region . by using the cdf of , the average sum rate of x - duplex systemis derived in this section . = \frac{1}{{\ln 2}}\int\limits_0^\infty { \frac{{1 - { f_\gamma } ( x)}}{{1 + x } } } dx,\ ] ] where is the cdf of .in order to simplify the final average sum rate expression , and are introduced to denote the approximate value of integral and , given in lemma [ sumrateintegral ] and [ lemmaw3p ] .[ sumrateintegral ] when , the exact value of integral is given by + \frac{{{e^ { - c{a^2}}}}}{2}{e_1}(c({b^2 } - { a^2 } ) ) - \frac{{{e^ { - \frac{{c{b^2}}}{2}}}}}{{2a}}\sum\limits_{k = 1}^\infty { { a^{2k - 2}}{c^{k - \frac{3}{2}}}{{(c{b^2})}^{\frac{1}{4 } - \frac{k}{2}}}{w_{\frac{1}{4 } - \frac{k}{2},\frac{3}{4 } - \frac{k}{2}}}(c{b^2})},\ ] ] where is the probability integral , and is the whittaker function , we use the first items of the third part of ( [ wi1 ] ) for approximation , denoted as .the derivation is presented in appendix d. [ lemmaw3p ] the approximate value of integral is given by } + { e^ { - c{\rho ^2 } + 2c\rho \frac{1}{\eta } } } \sum\limits_{k = 1}^{{n_2 } } { \sum\limits_{l = 1}^{2k } { \frac { { { { ( - c)}^k}}}{{k!{{(2c\rho ) } ^l } { { ( - \eta ) } ^{2k - l}}}}(\begin{array}{*{20}{c } } { 2k}\\ l \end{array } ) } } \left [ { \gamma ( l,{\varepsilon _ 2 } ) - \gamma ( l,{\varepsilon _ 1 } ) } \right],\ ] ] where , , , is the incomplete gamma function , first items are used to approximate value . the derivation is presented in appendix e. [ sumrate ] the average sum rate of x - duplex system can be expressed approximately as -\frac{2}{{{\lambda _ 2}{p_r}{c_2}}}{e^{\frac{c}{{2\eta } } } } { \gamma ^2}(2){w _ { - \frac{3}{2},0}}({z_1}){w _ { - \frac{3}{2},0}}({z_2 } ) \right.\nonumber\\ & & \left .{ + \frac{\eta } { { \eta - 1}}{e^{c{\rho ^2 } - \frac{1}{{{\lambda _ 1}{p_s}\eta } } } } \left [ { { w_{i1}}(1 - \rho , \rho , { n_1 } ) - \frac{1}{\eta } { w_{i1}}(\frac{1}{\eta } - \rho , 1 + \frac{1}{\eta } , { n_3 } ) - \frac{1}{\eta } { w_{i2 } } } \right ] } \right\}.\end{aligned}\ ] ] where , , , is the whittaker functions . the derivation is presented in appendix f. according to ( [ cdf ] ) and ( [ sumrate - origin ] ) , when snr goes infinite , the cdf of becomes and the average sum rate of x - duplex relay system can be derived as , it can be observed that the maximal achievable average sum rate of x - duplex relay system is not restricted by the self interference .the approximate average sum rate of fd mode and hd mode can be given as , { { \bar r}_{hd } } \approx \frac{1}{{2\ln 2}}{e^c}{e_1}(c),\ ] ] when snr goes infinite , the upper bound of fd mode can be derived ( * ? ? ?* eq.(3.195 ) ) . the upper bound of the average sum rate of fd mode is given in ( [ ratebound ] ) .it means that the practical average sum rate can not be larger than ( [ ratebound ] ) , which presents the achievable region of average sum rate of fd mode .comparing the ( [ ratexd ] ) and ( [ ratebound ] ) , the x - duplex relay system overcomes the restriction of self interference compared with fd mode . 
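Two identities carry the sum-rate results above: the complementary-CDF form of the average rate and E[ln(1+X)] = e^c E1(c) for an exponential X with rate c, which is where the exponential-integral term in the approximate HD rate comes from. A short numerical check with an arbitrary c; X here is only a stand-in, not the exact end-to-end SINR distribution.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

c = 0.8                                        # arbitrary positive rate parameter
rng = np.random.default_rng(3)
x = rng.exponential(1 / c, 1_000_000)          # X ~ Exp(c)

mc = np.mean(np.log2(1 + x))                                                # Monte Carlo average rate
ccdf = quad(lambda t: np.exp(-c * t) / (1 + t), 0, np.inf)[0] / np.log(2)   # (1 - F(x))/(1 + x) form
closed = np.exp(c) * exp1(c) / np.log(2)                                    # e^c E1(c) / ln 2
print(mc, ccdf, closed)                        # all three should agree
```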
in this subsection , a lower bound and a upper bound for the x - duplex relay s end - to - end sinr with pa are provided and the cdf of these bounds are obtained .finally , the diversity order of xd - pa is derived .the lower bound and upper bound for the end - to - end sinr ( [ xdpa1 ] ) can be written as where , , , . when , the function is a monotonically increasing function .therefore , the function is also monotonic when .the cdf distribution of , is given as with ( * ? ? ?* eq.(3.322 ) ) , we can obtain the outage probability of the upper bound where , , is the pdf of , $ ] .the taylor expansion of the upper bound is it can be observed that the diversity order of xd - pa is at least one .similarly , the outage probability of the lower bound can be calculated as using taylor s formula , we can obtain where .it can be observed that the diversity order of xd - pa is at most two .similarly , the taylor expansion of the upper bound and lower bound of the outage probability of the fd mode with pa can be provided as the diversity order of fd with pa is between and .as the diversity order of x - duplex is one at high snr , the diversity order of x - duplex is higher than fd with pa at high snr .in this section , simulations are provided to validate the performance analysis of the relay system with x - duplex relay . without loss of generality, we set the snrs of source - relay and relay - destination channel as one , .the transmit power of the source and relay is set as equal under the fixed power allocation condition , .the threshold of the outage probability is set as 2 bps / hz .it is shown in that the self interference can be cancelled up to 110 db .we assume the self interference cancellation ability is between 70db and 110db .the path loss between source and relay is modeled as .therefore , the residual self interference level is set as .2 demonstrates the outage probability performance of x - duplex relay system with different self interference 0.2 , 0.05 and 0.01 .the outage performance of fd mode and hd mode is also illustrated for comparison . as can be seen , the exact outage probability curves tightly matches with the analytic expression given in ( [ outage ] ) .the figure reveals that x - duplex relay system s outage probability is lower than both fd and hd schemes . at high snr, the fd scheme has an error floor , which coincides with the analytical results in ( [ fdoutfloor ] ) .when the snr goes infinite , the x - duplex relay eliminates the error floor and remains the full diversity order , as shown in ( [ dxd ] ) and ( [ dhd ] ) .the effect of self interference on the x - duplex relay system is very small at high snr .this is because the hd mode is more likely to be selected in the x - duplex relay as the performance of fd mode is interference limited at high snr .the x - duplex benefits more from the hd mode , whose performance is independent of residual self interference and improves with the increase of transmit power .therefore , the impact of residual self interference from fd mode on x - duplex becomes smaller as snr increases and the curves of x - duplex under different become close . ,0.05 , 0.01 , the dashed lines of performance floor coincide with analytical results in ( [ fdoutfloor ] ) , and the intersection point of fd and hd mode coincides with analytical results in ( [ outageintersection]).,width=384 ] fig . 
3compares the finite snr diversity order of x - duplex relay with pure fd and hd mode at .the diversity order of x - duplex relay system increases with that of fd mode from low to medium snr as fd mode is more likely to be selected in this region .when the diversity order of fd mode decreases , the performance of x - duplex relay system is influenced .as the performance of hd mode improves with snr , the diversity order of x - duplex relay system increases as hd mode is more likely to be selected .when snr goes infinity , the diversity order curve of x - duplex relay system approaches that of hd mode because fd mode encounters the performance floor . at high snr ,the x - duplex relay eliminates the error floor and achieves the full diversity order as the hd mode , which is consistent with section iii b. , the dashed lines coincide with analytical results in ( [ serlowerbound]).,width=384 ] fig . 4 plots both the analytical and simulated results of the ser in the x - duplex relay system with .the ser performance of fd and hd is depicted for comparison . from the figure, we can observe that x - duplex relay system achieves a better performance compared with pure fd and hd schemes . at high snr ,the x - duplex relay removes the performance floor .the curves of x - duplex and hd mode become close at high snr as the benefit from fd mode is limited by the residual self interference .5 depicts the average sum rate of the x - duplex system versus snr with .the approximate analytical expression in ( [ longrate ] ) tightly approaches the exact average sum rate .it can be seen from the figure that x - duplex relay system provides a higher sum rate than that of fd and hd .the performance improvement of x - duplex is most significant at medium snr ., the dashed lines coincide with analytical results in ( [ ratebound]).,width=384 ] in fig .6 , the simulated average sum rate of the x - duplex system versus self interference with different levels of transmit power is depicted . in the weak self interference region , fd achieves a higher sum rate than hd . as self interference increases ,the average sum rate of fd mode significantly decreases and performs worse than the hd mode .the average sum rate of x - duplex relay system is always better than fd and hd mode .the performance of x - duplex decreases quickly with the self interference increases , and is most obvious at high snr . when the self interference is perfectly cancelled , the average sum rate of x - duplex is twice that of hd mode .7 illustrates the outage probability of xd - pa subject to the total power constraint .the performance of the x - duplex relay system with uniform power allocation , is illustrated for comparison . according to this figure ,the outage probability performance of x - duplex relay system can be improved with adaptive power allocation compared with equal power allocation .the diversity order of xd - pa is between one and two .the performance of fd with power allocation is also plotted for comparison and the diversity order of fd - pa is between and one .it can observed that the diversity order of x - duplex is higher than fd with pa at high snr , which coincide with the analysis in section iii e. 
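The finite-SNR diversity order compared in Fig. 3 can be read off any outage curve by log-log differentiation. A minimal helper is sketched below; the single-Rayleigh-link curve used in the example is only a placeholder (its diversity order tends to one), and the simulated or analytical X-duplex, FD and HD curves can be fed in the same way.

```python
import numpy as np

def finite_snr_diversity(snr_db, p_out):
    """d(snr) = - d ln P_out / d ln snr, estimated on a sampled outage curve."""
    ln_snr = np.log(10.0 ** (np.asarray(snr_db, dtype=float) / 10.0))
    return -np.gradient(np.log(np.asarray(p_out, dtype=float)), ln_snr)

snr_db = np.arange(0, 41, 2)
gamma_th = 2 ** 2 - 1                                     # threshold for 2 bps/Hz
p_out = 1 - np.exp(-gamma_th / 10 ** (snr_db / 10))       # placeholder: single Rayleigh link
print(np.round(finite_snr_diversity(snr_db, p_out), 3))   # approaches 1 at high SNR
```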
.,width=384 ] the system model of hybrid fd / hd relaying , rams scheme and x - duplex in this paper can be classified into three categories according to the deployment of antennas at the relay : ( a ) separated antenna without antenna selection [ 21 ] , ( b ) separated antenna with antenna selection [ 20 ] , ( c ) shared antenna in this paper .the major differences between these three categories can be summarized as follows : _ structure and implementation : _ in ( a ) , ( b ) , ( c ) , the number of antennas at the relay are two , two and one , respectively .the connection between the antenna and rf chain is fixed in ( a ) , however it is flexible in ( b ) and ( c ) . in ( a ) and ( b ) , asthe channels between the source and two antennas at relay may be different in practical scenarios , we need to determine which antenna is selected as tx antenna , and the other as rx antenna . in ( a ) , the decision is made at deployment time and the configuration of each antenna is fixed . in ( b ) , as each antenna can be configured as tx or rx antenna , the deployment of antennas is simpler compared with ( a ) .however , the decision system could be more complex as two antennas can be adaptively configured according to instantaneous channel information and thus more operating modes need to be considered compared with ( a ) . in [ 20 ] , there are two fd modes where two antennas are configured as tx / rx or rx / tx . in a shared antenna relay system ( c ) , since tx / rx share one single antenna , there will be no tx / rx selection process involved as there is only one channel between the source and relay ._ performance : _ compared with ( a ) , the system in ( b ) can provide an additional spatial diversity gain at the destination and improve the performance with efficient utilization of two antennas . specifically , considering one relay , the system in ( b ) achieves twice of the diversity order at low to medium snrs and a lower error floor at high snrs compared with the fixed antenna configuration ( a ) operating at fd mode [ 20 ] . comparing ( c ) and ( a ) , one shared antenna can operate the same way as two separated fixed antennas .the shared antenna can exploit antenna resources more efficiently compared with fixed antennas .thus , ( c ) is more suitable to be deployed into small equipments , such as mobile phone , small sensor nodes .the performance of x - duplex relaying system in ( c ) is the same as that of hybrid fd / hd switching in ( a ) . _complexity : _ in ( a ) and ( c ) , the csis of three channels , including the channel from source to relay , the channel from relay to destination and the self interference channel , need to be measured and sent to the decision node for decision through feedback channels , which requires feedback overhead . in ( b ) , the csis of 5n channels , including 2n channels from the source to n relays for two antenna modes at relay , n self - interference channel at n relays , and 2n channels from n relays to the destination for two antenna modes at relay , requires the feedback overhead of . from this perspective , the complexity of ( a ) and ( c ) is the same and smaller than that of ( b ) where more csis need to be estimated and transmitted .in this paper , we investigated a x - duplex relay for the af relay network , in which the relay is equipped with a shared antenna . 
by adaptively configuring the antenna connection with two rf chains ,the x - duplex relay system can achieve a better performance than both hd and fd schemes and eliminate the performance floor of fd caused by the residual self - interference .we also designed the xd - pa subject to the total power constraint to further improve the performance .asymptotic expressions of the cdf , outage probability , average ser performance , and average sum rate were derived .the analytic results were validated by computer simulations .both analysis and simulations demonstrated the superiority of the x - duplex relay over both fd and hd schemes .the fd mode s end - to - end sinr is given in ( 7 ) .the distribution of is mentioned in the cdf of the end - to - end sinr is expressed as the integral in ( [ fxx ] ) does not possess a closed - form solution in the scope of our knowledge .the value of the integral is mainly decided by the exponent part , especially at high snr .we adopt taylor s formula in ( * ? ? ?* eq.(1.112 ) ) to derive the asymptotic result the integral ( [ fxx ] ) is further obtained as where , , with ( * ? ? ?* eq.(3.471.9 ) ) , ( [ fdcdf ] ) is obtained .therefore , lemma [ lemmafdcdf ] can be obtained .we can write the cdf expressions of fd mode and hd mode as where , the expression can be transformed into .as the value of are positive definite , we only consider the case when .therefore , can be further simplified as we define ,\ ] ] when , , when , .the distribution of splits into two sub - probabilities , and , denoted as , .consider , we can write using the approximation in ( [ appro1 ] ) , and in high snr region , with the help of ( * ? ? ? * eq.(3.324.1 ) ) and ( * ? ? ?* eq.(3.462.20 ) ) , ( [ i1 ] ) is obtained .consider , we can write after a few mathematical manipulations , ( [ i2 ] ) is derived . therefore , lemma [ lemmahybridcdf ] is proved .as the upper limit of the integral only relate to , in the high snr region when converges to zero , the approximation is used to obtain the approximate value where , . as , ( [ w3part ] ) can be divided into two parts . with the help of and (* eq.(3.381.1 ) ) , ( [ w3p ] ) is derived .therefore , lemma [ lemmaw3p ] is proved .after substituting ( [ cdf ] ) into ( [ ser ] ) and adopting the approximation in the high snr region that converges to , and that the value of is comparatively small ( * ? ? ? * eq.(10.30 ) ) , which can be ignored for asymptotic analysis .( [ ser ] ) can be simplified as with the help of ( * ? ? ?* eq.(3.381.4 ) ) , can be denoted as denoting , with the help of ( * ? ? ?* eq.(3.383.10 ) ) , is given as denoting , when the snr is high and is around zero , approximation is used , with ( * ? ? ?* eq.(3.462.1 ) ) , is given as {e^ { - ( \frac{1}{{{\lambda _ 1}{p_s}\eta } } + { a_2 } + 2c)x - c{x^2 } - \frac{1}{{{\lambda _ 1}{p_s}\eta } } } } dx } \nonumber\\ = & & \eta { e^ { - \frac{1}{{{\lambda _ 1}{p_s}\eta } } { \rm { + } } \frac{{{\mu _ 1}^2}}{{8c}}}}{\left ( { 2c } \right)^ { - \frac{3}{4}}}\gamma ( \frac{3}{2}){d _ { - \frac{3}{2}}}(\frac{{{\mu _ 1}}}{{\sqrt { 2c } } } ) + \frac{1}{2}{\eta ^3}{e^ { - \frac{1}{{{\lambda _ 1}{p_s}\eta } } { \rm { + } } \frac{{{\mu _ 2}^2}}{{8c}}}}{\left ( { 2c } \right)^ { - \frac{7}{4}}}\gamma ( \frac{7}{2}){d _ { - \frac{7}{2}}}(\frac{{{\mu _2}}}{{\sqrt { 2c } } } ) , \end{aligned}\ ] ] where , . substituting ( [ ser1 ] ) , ( [ ser3 ] ) and ( [ ser4 ] ) into ( [ serpart ] ) , ( [ serfinal ] ) can be obtained . 
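The step that produces the Bessel-function terms in Appendix A is the table integral (Gradshteyn-Ryzhik 3.471.9), int_0^inf x^(v-1) exp(-b/x - g x) dx = 2 (b/g)^(v/2) K_v(2 sqrt(bg)). It can be confirmed numerically for arbitrary positive parameters, which is a useful sanity check when reproducing the derivation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

b, g, v = 0.7, 1.3, 1.0   # arbitrary positive test values
lhs = quad(lambda x: x ** (v - 1) * np.exp(-b / x - g * x), 0, np.inf)[0]
rhs = 2 * (b / g) ** (v / 2) * kv(v, 2 * np.sqrt(b * g))
print(lhs, rhs)           # should match to numerical precision
```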
denoting , with the help of integral , can be derived as .\ ] ] denoting , after a few mathematical simplifications , is given as where , .the approximate value of is given in lemma [ lemmaw3p ] , we use for approximation . the exact expression of , can be derived using lemma [ sumrateintegral ] , we use the first , items to derive the approximate value . when , we use for approximation .therefore , the value of integral is obtained . denoting , after adopting , we can derive with the help of ( * ? ? ?* eq.(6.647.1 ) ) , can be obtained . for , as in the high snr region , converges to zero , is comparatively small compared with other parts in ( [ sumrate - part ] ) and can be ignored in our derivation .1 s. li , m. zhou , j. wu , l. song , y. li and h. li , `` protocol design and performance analysis for x - duplex amplify - and - forward relay networks , '' in _ ieee international conference on communications ( icc ) _ , may . 2016 .m. duarte and a. sabharwal , `` full - duplex wireless communications using off - the - shelf radios : feasibility and first results , '' in _ proc .asilomar conf .signals , syst ._ , pp . 15581562 , nov .2010 .b. p. day , a. r. margetts , d. w. bliss , and p. schniter , `` full - duplex mimo relaying : achievable rates under limited dynamic range , '' _ieee j. sel .areas commun ._ , vol . 30 , no . 8 , pp .15411553 , dec . 2012 .e. everett , m. duarte , c. dick , and a. sabharwal , `` empowering full - duplex wireless communication by exploiting directional diversity , '' in _ proc .asilomar conf .signals , syst ._ , pp . 20022006 , nov . 2011 .t. riihonen , s. werner and r. wichman , `` residual self - interference in full - duplex mimo relays after null - space projection and cancellation , '' in _ proc .asilomar conf .signals , syst ._ , pp . 653657 , nov . 2010 .a. sabharwal , p. schniter , d. guo , d. w. bliss , s. rangarajan and r. wichman , `` in - band full - duplex wireless : challenges and opportunities , '' _ieee j. sel .areas commun .9 , pp . 16371652 , sep . 2014 .oh , and d. hong , `` catching resource - devouring worms in nextgeneration wireless relay systems : two - way relay and full - duplex relay , '' _ ieee commun . mag .9 , pp . 5865 , sep .t. riihonen , s. werner , r. wichman and e. zacarias b. , `` on the feasibility of full - duplex relaying in the presence of loop interference , '' in _ proc .10th ieee workshop signal process .wireless commun ._ , pp . 275279 , jun . 2009 .t. riihonen , s. werner and r. wichman , `` comparison of full - duplex and half - duplex modes with a fixed amplify - and - forward relay , '' in _ proc . ieee wireless communications and networking conference ( wcnc ) _ , pp . 15 , apr . 2009 .taehoon kwon , sungmook lim , sooyong choi and daesik hong , `` optimal duplex mode for df relay in terms of the outage probability , '' _ ieee trans .59 , no . 7 , pp. 36283634 , sep . 2010 .i. krikidis , h. a. suraweera , p. j. smith and c. yuen , `` full - duplex relay selection for amplify - and - forward cooperative networks , '' _ ieee trans .wireless commun .43814393 , dec. 2012 .k. yang , h. cui , l. song and y. li , `` efficient full - duplex relaying with joint antenna - relay selection and self - interference suppression , '' _ ieee trans .wireless commun .14 , no . 7 , pp. 39914005 , jul .2015 .t. m. kim and a. paulraj , `` outage probability of amplify - and - forward cooperation with full duplex relay , '' in _ proc .ieee wireless communications and networking conference ( wcnc ) _ , pp . 7579 , apr . 2012 .h. 
ju , e. oh , and d. hong , `` improving efficiency of resource usage in two - hop full duplex relay systems based on resource sharing and interference cancellation , '' _ ieee trans .wireless commun ._ , vol . 8 , no . 8 , pp .39333938 , aug .g. liu , f. r. yu , h. ji , v. c. m. leung and x. li , `` in - band full - duplex relaying : a survey , research issues and challenges , '' _ ieee communications surveys & tutorials . _ , vol .2 , pp . 500524 , secondquarter 2015 . b. p. day , a. r. margetts , d. w. bliss and p. schniter , `` full - duplex bidirectional mimo : achievable rates under limited dynamic range , '' _ ieee trans . signal process .60 , no . 7 , pp. 37023713 , jul .t. m. kim , h. j. yang and a. j. paulraj , `` distributed sum - rate optimization for full - duplex mimo system under limited dynamic range , '' _ ieee signal process lett .6 , pp . 555558 , jun . 2013 .r. narasimhan , a. ekbal and j. m. cioffi , `` finite - snr diversity - multiplexing tradeoff of space - time codes , '' in _ proc .ieee international conference on communications ( icc ) _ , vol . 1 , no . 6 , pp .458462 , may . 2005 .z. zhang , k. long , a. v. vasilakos and l. hanzo , `` full - duplex wireless communications : challenges , solutions , and future research directions , '' in proceedings of the ieee , vol .104 , no . 7 ,1369 - 1409 , july 2016 .
In this paper, we study an X-duplex relay system with one source, one amplify-and-forward (AF) relay and one destination, where the relay is equipped with a shared antenna and two radio-frequency (RF) chains used for transmission or reception. The X-duplex relay can adaptively configure the connection between its RF chains and the antenna to operate in either HD or FD mode, according to the instantaneous channel conditions. We first derive the distribution of the signal-to-interference-plus-noise ratio (SINR), based on which we then analyze the outage probability, the average symbol error rate (SER) and the average sum rate. We also investigate the X-duplex relay with power allocation and derive lower and upper bounds on the corresponding outage probability. Both analytical and simulated results show that the X-duplex relay achieves better performance than the pure FD and HD schemes in terms of SER, outage probability and average sum rate, and that the performance floor caused by the residual self-interference can be eliminated using flexible RF chain configurations. Keywords: full duplex, amplify-and-forward relaying, mode selection, power allocation.
ordering dynamics is not only a classical subject of non - equilibrium statistical physics , but also one of the most studied issues in the field of sociophysics .it often represents opinion dynamics under the most common type of the social influence , known as conformity . among many others , models with binary opinionsare of particular interest .one of the most general models of binary opinion dynamics was introduced by castellano et al . under the name the -voter model as a simple generalization of the original voter model . in the proposed model , randomly picked neighbors ( with possible repetitions ) influence a voter to change its opinion .if all neighbors agree , the voter takes their opinion ; if they do not have an unanimous opinion , the voter can still flip with probability .it has been argued that for and the -voter model coincides with the modified sznajd model , in which unanimous pair ( in one dimension ) of the neighboring sites influences one of two randomly chosen neighbors i.e. or . following this reasoning in ref . we have introduced a modified one - dimensional version of the -voter model , as a natural extension of the original voter and the sznajd model : a panel of neighboring spins is picked at random. if all neighbors are in the same state , they influence one of two neighboring spins or .if not all spins in the -panel are equal then nothing changes in the system .this modification has been later considered in refs . . for both formulations of the -voter model seem to be almost identical with the exception of the repetitions possible in the original version .however , there is another difference between formulations and , namely the first belongs to the class of so called inflow and the second to outflow dynamics .there was a controversy related to the subject if the inflow and outflow dynamics are equivalent .recently , it has been shown that they are equivalent for , at least in respect to the exit probability .however for larger values of , even in one dimension the situation is not clear .moreover , differences between dynamics in respect to the phase transitions induced by the stochastic noise has not been investigated up till now . at first glance, it seems to be trivial that choosing a different set of interaction partners will lead to different results on the macroscopic scale .however , as described above , the problem occurred to be not as simple as it seems and gained the attention in the literature. therefore , one of the aims of this paper is to contribute to the outflow - inflow discussion .the second , probably more significant , aim is related to applications of the -voter model in social sciences , because as noted by macy and willer _ there was a little effort to provide analysis of how results differ depending on the model designs _moreover , in respect to social applications one could ask the question how to construct the group of influence to create easier an order ( consensus ) in the system ?another question one could ask is the problem relevant for any type of a network or maybe for some networks different types of the group of influence will lead to the same results at the macroscopic scale ? only in the case of a complete graph the definition of the -voter model is straightforward since on this topology all spins are neighbors , all proposed versions of the -voter model are equivalent . 
in the context of opinion dynamicsit would be however desirable to consider the models on top of more complex networks , as they are better representations of contact patterns observed in the social systems .there are already several attempts to generalize the -voter model to complex networks .however , as shown in ref . , even in the simple case of transferring the model from 1d chain to a 2d square lattice there is no unique rule of choosing the group of influence . * thus , the main goal of this paper is to check , how different ways of picking up the group may impact the macroscopic behavior of the model . * specifically , we will focus here on the phase transitions induced by the stochastic noise that represents one type of the social response , known as independence .within the modified -voter model we consider a set of agents called spinsons .this name , being a combination of the words _`` spin '' _ and _ `` person '' _ , is used to emphasize that the ising spins in our model represent persons characterized by only one binary trait ( a detailed explanation of this notion may be found in ref .each -th spinson has an opinion on some issue that at any given time can take one of two values ( `` up '' and `` down '' ) .the opinion of a spinson may be changed under the influence of its neighbors according to two different types of the social response : * _ independence _ is a particular type of non - conformity .it should be understood as unwillingness to yield to the group pressure .independence introduces indetermination in the system through an autonomous behavior of the spinsons . * _ conformity _ is the act of matching spinson s opinion to a group norm .the nature of this interaction is motivated by the psychological observations of the social impact dating back to asch : if a group of spinson s neighbors unanimously shares an opinion , the spinson will also accept it .other types of the social response are possible as well ( see ref . for an overview ) , but the above two are of particular interest for studying opinion dynamics .we study the model by means of monte carlo simulations with a random sequential updating scheme .each monte carlo step consists of elementary events , each of which may be divided into the following steps : ( 1 ) pick a spinson at random , ( 2 ) decide with probability , if the spinson will act as independent , ( 3 ) if independent , change its opinion with probability , ( 4 ) if not independent , let the spinson take the opinion of its randomly chosen group of influence , provided the group is unanimous .more details on the dynamic rules of the model may be found in ref .it is worth to stress here the difference between the modified -voter model with independence and the original -voter model with .in fact one could introduce a generalized model with both parameters and , in which each elementary time step is described by the following algorithm : 1 .choose at random one spinson located at site .2 . decide with probability , if the spinson will act independently .3 . in case of independence, a spinson flips to the opposite state with probability 1/2 .4 . in other case ( conformity ) , choose neighbors of site ( a so called -panel ) : 1 .if all the neighbors are in the same state , i.e. -panel is unanimous , the spinson takes the state of the neighbors .otherwise , i.e. 
if -panel is not unanimous , spinson flips with probability .clearly , the original -voter model is a special case of the above algorithm with and the model considered here is a special case with .note , that in contrast to , parameter does not describe the independence . for ,if only the unanimous -panel exists , the spinson will take its state , which means that it never acts independently . in consequence ,the state with all spinsons in the same state is the absorbing steady state for the original -voter model , whereas it is not for the model considered here unless .therefore , the original -voter model with is not suitable to model e.g. diffusion of innovation , for which the initial state with all spinsons down ( unadopted ) is a typical one .introduction of a group pressure as one of the rules governing the dynamics assumes some form of interactions between the spinsons .those interactions are best illustrated as connections between nodes of a graph the spinsons are living on . in its original formulation , the -voter model can be easily investigated on an arbitrary lattice and the definition is clear , since individuals influencing the voter are chosen with repetitions from the nearest neighborhood of the voter .therefore , even in one - dimension the parameter can have an arbitrary value .although such a definition of the model leads to interesting results from the physical point of view , it seems to be sociologically unreliable . in the modified version ,repetitions are forbidden , which is probably more sociologically justified . moreover , influencing agents may form panels of different kinds , including structures proposed in , which is also different from the original formulation of the model .this in turn allows to investigate the role of the group structure .however , such a modified definition causes ambiguity in mapping the model on an arbitrary graph .we use here both the watts - strogatz and the barabsi - albert networks as the underlying topology of spinson - spinson interactions , since they nicely recover the small world property of many real social systems .we set for two reasons : ( 1 ) to reflect the empirically observed fact that a group of four individuals sharing the same opinion has a high chance to convince the fifth , even if no rational arguments are available and ( 2 ) to compare our results with those obtained on the square lattice . chosen from the plethora of possibilities , in fig .[ influence_group ] six different groups of influence on a complex network are schematically shown .we would like to stress here that the choice of precisely such groups is not accidental and is dictated mainly by earlier papers : * - after picking up a random target spinson ( marked with a double red circle in the figure ) , we randomly choose one of its neighbors , then one of the neighbors of the neighbor and finally a neighbor of the latter one .all members of the -panel are indicated with a blue circle in the figure .this is the natural generalization of the 1d -voter model and was used e.g. in ref . . * - the group consists of a random neighbor of the target spinson , and three neighbors of the neighbor .this method resembles to some extent the -block used on square lattices in 2d and was used for instance in ref . . * - four randomly chosen nearest neighbors of the target spinson are in the group .this method was used in the original -voter model . 
* - this is a slight modification of the method leading to an extended range of the influence : the group is composed of three randomly chosen nearest neighbors of the target spinson , and a neighbor of one of those nearest neighbors . we have introduced this method just to investigate the impact of the range of an influence group on system s ability to stay in an ordered state .* - a spinson and its three neighbors build the the group of influence as in the method .however , the block may be located anywhere on the network .this method has been chosen as a reference for the mean - field type approach represented by the next method and similarly as it is aimed to investigate the role of the range of interaction . * - the group consists of four randomly chosen spinsons , not necessarily connected with the target spinson .this corresponds to the mean - field approach , for which analytical results on the phase transitions are already known ., title="fig : " ] , title="fig : " ] + , title="fig : " ] , title="fig : " ] + , title="fig : " ] , title="fig : " ] note that in case of and methods we actually abstract away from the underlying network topology of the model .we expect both methods to be equivalent to the complete graph case if the minimum degree of a node in the network is bigger than or equal to ( otherwise the may differ slightly from the complete graph , because there will be not always enough spinsons in the neighborhood to build the influence group ) .although we use mostly as a benchmark for our simulations , the is much more interesting because it corresponds to a situation often encountered in many organizations , in which an informal and unknown network of interactions is over imposed on the given formal communication structure . as a function of the independence factor for different groups of influence on the square lattice .as seen , there is no phase transition for nn model on the square lattice and the critical value of increases with the interaction range , as expected.[fig_sq ] ] as a function of the independence factor for six different groups of influence on the barabsi - albert network of size and parameters and .four models ( block , line , randblock and rand ) collapse into a single curve and only two ( nn and nn3 models ) can be distinguished from others.[fig_ba ] ]the main goal of this paper is to answer the questions if and how details at the microscopic level manifest at the macroscopic scale . among other macroscopic phenomena ,phase transitions are certainly the most interesting ones . for the models of opinion dynamics ,the most natural order parameter is an average opinion , defined as magnetization i.e. .it has been shown that in the case of the -voter model the phase transition may be induced by the independence factor .below the critical value of independence , , the order parameter . for high independence , , there is a status - quo , i.e. .such results were obtained on the complete graph topology which corresponds to the mean field approach , as well as on the square lattice but only for the one particular choice of the q - panel equivalent to the sznajd model . in a box of four neighboring spinsons were chosen randomly and influenced one of the 8 neighboring sites of the box . 
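To make the different panel constructions concrete before comparing them on various topologies, here is a minimal simulation sketch of the elementary event defined above, with three of the six rules implemented as we read their verbal descriptions (q = 4, a flip probability of 1/2 under independence, and a Watts-Strogatz substrate as in the text; any unspecified detail, e.g. how a neighborhood smaller than q is handled, is an implementation assumption rather than the paper's prescription).

```python
import random
import numpy as np
import networkx as nx

def pick_nn(G, i, q):
    """'nn': q distinct nearest neighbours of the target."""
    nbrs = list(G[i])
    return random.sample(nbrs, q) if len(nbrs) >= q else nbrs

def pick_rand(G, i, q):
    """'rand': q random agents anywhere in the network (mean-field-like)."""
    return random.sample([v for v in G if v != i], q)

def pick_line(G, i, q):
    """'line': a walk of length q started from a random neighbour of the target."""
    panel, prev, cur = [], i, random.choice(list(G[i]))
    for _ in range(q):
        panel.append(cur)
        nxt = [v for v in G[cur] if v != prev] or list(G[cur])
        prev, cur = cur, random.choice(nxt)
    return panel

def magnetisation(G, pick, p, q=4, mcs=300, flip=0.5):
    """q-voter model with independence p; returns |m| after mcs Monte Carlo steps."""
    s = {v: 1 for v in G}                          # start from consensus
    nodes = list(G)
    for _ in range(mcs * len(nodes)):
        i = random.choice(nodes)
        if random.random() < p:                    # independence
            if random.random() < flip:
                s[i] = -s[i]
        else:                                      # conformity to a unanimous q-panel
            panel = pick(G, i, q)
            if len(panel) == q and len({s[j] for j in panel}) == 1:
                s[i] = s[panel[0]]
    return abs(np.mean(list(s.values())))

G = nx.connected_watts_strogatz_graph(500, 8, 0.1, seed=7)
for p in (0.1, 0.2, 0.3, 0.4):
    print(p, magnetisation(G, pick_nn, p), magnetisation(G, pick_rand, p))
```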
herewe test six different methods described in the previous section ( see fig .[ fig_sq ] ) .it is seen that the critical value of the independence factor strongly depends on procedure of choosing an influence group .the phase transition is observed for all methods except of , which corresponds to so called inflow dynamics .as expected , and methods overlap and agree with mfa result found in , i.e. , which for gives .methods for which the range of interaction is shorter tend to show lower critical value of .this result is very intuitive , since the infinite range of interactions usually corresponds to mfa and gives the largest critical value .results on the barabsi - albert ( ba ) network ( see fig . [ fig_ba ] ) are less intuitive .it occurs that for this topology differences between methods are almost negligible .the phase transition is observed for all six models and the critical value of changes only slightly with method .four models ( block , line , randblock and rand ) collapse into a single curve and only two ( nn and nn3 models ) can be distinguished from others .the natural question arises here - why differences between models are clearly visible on the square lattice and are almost negligible in the case of ba ?it should be recalled that the average path length for the square lattice increases with the system size as , whereas in the case of ba as .it means that for the same system size the average path length is dramatically shorter on ba than on the square lattice . in resultthe range of interactions on ba is effectively much larger . to check the role of the average path length we have simulated all 6 methods on the watts - strogatz network .this topology is particularly convenient because for the fixed network size it is possible to decrease the average path length by increasing the rewiring probability . as a function of the independence factor for different groups of influence on the watts - strogatz network of size with the average degree and rewiring parameter .the critical value of independence increases with the range of interactions , as expected .however , with increasing differences between models vanish and for large only two models ( nn and nn3 ) can be distinguished from others.[fig_ws ] ] results for several values of are presented in fig .[ fig_ws ] .as increases the critical point shifts towards higher values and simultaneously differences between 4 methods vanish up to a threshold value .results for larger values of ( i.e. ; not shown in fig . [ fig_ws ] ) are identical with those obtained for . to check how results scale with the system size we simulated models on networks of sizes from to ( see fig .[ fig_wsl ] ) .surprisingly , results are virtually independent on the system size .analogous results has been obtained for the exit probability in one dimensional system with inflow and outflow dynamics . 
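The argument about the effective interaction range can be checked directly: for graphs of (almost) the same size, the average distance on the square lattice is far larger than on BA or WS graphs. A short networkx comparison with arbitrary example sizes and parameters:

```python
import networkx as nx

grid = nx.grid_2d_graph(30, 30)                               # square lattice, 900 nodes
ba = nx.barabasi_albert_graph(900, 2, seed=1)
ws = nx.connected_watts_strogatz_graph(900, 4, 0.1, seed=1)

for name, G in [("square lattice", grid), ("barabasi-albert", ba), ("watts-strogatz", ws)]:
    print(name, round(nx.average_shortest_path_length(G), 2))
```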
as a function of the independence factor on the watts - strogatz network with the average degree and rewiring parameter for several system sizes .results are not influenced significantly by the system size and the critical value of independence increases with the range of interactions , as expected.[fig_wsl ] ] the fact that results do not depend on the system size undermines our predictions that the average path length itself determines if all mapping methods overlap or not , because the increases with the system size .however , one should probably not look at the path length itself but at the relative path length , which is defined as the average path length of a given network divided by the average path length of a random network of the same size and average degree . normalizing networks characteristics by those of the corresponding random graphs is a procedure usually used to compare networks of different sizes .thus , we will use the relative path length to describe the networks under consideration and to compare them .for example for the watts - strogatz network with and the relative path length is equal for and for , i.e. almost size independent .for barabsi - albert of size it is much shorter i.e. and almost does not change with the system size .interestingly , if one consider relative path length for watts - strogatz of size and with different it occurs that relative path length approaches for and precisely : .this somehow explains why results for ba and ws with are almost identical . to checkif our predictions about the relative path length are correct , we decided to investigate the problem on several networks of the same size , including real twitter networks .we took twitter data from the stanford large network dataset collection available at https://snap.stanford.edu/data/egonets-twitter.html , because it includes about 1000 different networks with a broad spectrum of diverse characteristics .it was relatively easy to find in the dataset networks of the same size , but with different average path lengths and/or clustering coefficients .thus the dataset was well suited for testing our hypothesis about the path lengths .magnetization as a function of the independence factor on six different networks of size is presented in fig .[ fig_real ] . in the top row results on three real twitter networks are presented .networks in the left and middle top panels have almost identical path length but different clustering coefficient . on the other hand ,middle and right networks have almost identical clustering coefficient but slightly different path length .it seems that results for all methods overlaps the best on the right network , which has the shortest path length .simultaneously , it seems that results on left and middle networks are the most similar to each other , i.e. path length is more significant than clustering coefficient in determining if all mapping methods will give the same result or not .however , because the differences between properties that we take into account ( i.e. and ) do not vary much from network to network , for all three twitter networks almost all methods collapse into a single curve and only and slightly deviate from others . in the bottom row of fig .[ fig_real ] results on three artificial networks are presented .we took two watts - strogatz networks ( left and middle plots ) and a barabsi - albert one . 
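The relative path length used in this argument can be computed directly; a sketch, assuming the reference value is taken as the mean average path length of Erdos-Renyi graphs with the same number of nodes and the same average degree (restricted to the giant component when disconnected):

```python
import networkx as nx

def relative_path_length(G, n_ref=5, seed=0):
    """Average path length of G divided by the mean value for same-size, same-degree random graphs."""
    n, k = G.number_of_nodes(), 2 * G.number_of_edges() / G.number_of_nodes()
    L = nx.average_shortest_path_length(G)
    refs = []
    for i in range(n_ref):
        R = nx.gnp_random_graph(n, k / (n - 1), seed=seed + i)
        giant = max(nx.connected_components(R), key=len)   # ignore isolated pieces
        refs.append(nx.average_shortest_path_length(R.subgraph(giant)))
    return L / (sum(refs) / len(refs))

ws = nx.connected_watts_strogatz_graph(1000, 8, 0.01, seed=1)
ba = nx.barabasi_albert_graph(1000, 4, seed=1)
print(relative_path_length(ws), relative_path_length(ba))
```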
In the case of the WS network with a clustering coefficient similar to the Twitter networks but a much longer path length, each mapping gives a completely different result (bottom left). On the other hand, if the path lengths are similar to those of the real networks, for both the WS and BA models we observe the already known behavior: most methods collapse into one curve and only nn and nn3 differ slightly from the others. Thus, this feature depends neither on the network topology nor on the clustering coefficient, which again confirms that the path length is the significant property for the investigated problem.

(Figure [fig_real]: magnetization as a function of the independence factor on six different networks of the same size. Top row: three real Twitter ego networks; the network ids in the plot titles correspond to file names ('ego' node ids) in the Twitter dataset taken from https://snap.stanford.edu/data/egonets-twitter.html. Bottom row: three model networks, namely a Watts-Strogatz network (bottom left), a second WS network with different parameters (bottom middle) and a Barabasi-Albert one (bottom right).)

From the obtained results we also conclude that neither the average degree nor the degree distribution is significant for the investigated problem. We realize that there are more properties of the real networks that could be taken into account, but our prediction about the importance of the relative path length seems to be reasonable also from the statistical-physics point of view. It should be noted that for all networks considered in this paper two methods, randblock and rand, collapse into a single curve and overlap the mean-field results. This is easily understood in the case of the rand method: regardless of the topology, the group members are chosen at random. In the case of randblock the result is less obvious; however, this method introduces interactions of infinite range, and we know from statistical physics that the mean-field approach should give exact results in the case of infinite-range interactions. Following this reasoning we can also understand why, with decreasing path length, the results for all mapping methods approach the mean-field results: the relative interaction range increases. Another phenomenon that can be understood on this basis is that the method with relatively the shortest interaction range differs the most from the mean-field curve, while the method with a much larger range is the most similar to it. The differences between the methods may also be explained, at least qualitatively, in terms of the probabilities of finding non-unanimous influence groups. For the sake of simplicity let us assume that our network has the topology of a Bethe lattice with a fixed coordination number.
For convenience, we took a modified definition of the Bethe lattice, with the central node having only neighbors (fig. [influence_group trees]). Thus, the ego graph of the central node consists of its closest neighborhood, a second level of agents, and further levels of nodes at increasing distance. Now, let us consider our model at an early stage of a simulation. Let us assume that there are only two spinsons, including the central one, in the "down" state due to independence and that the central spinson has been chosen again in a basic Monte Carlo event (it will be referred to as the target spinson in the following). However, this time it is not independent, i.e., it is exposed to the group pressure. Since most of the spinsons are in the "up" state, the system has a natural tendency to reduce disorder due to conformity. Nevertheless, we can ask whether there are significant differences between the methods in maintaining disorder in the system. In other words, we can check if the methods differ in the probabilities of finding a non-unanimous group of influence in this situation.

(Figure [influence_group trees]: a modified Bethe lattice with the root node and its neighbors. There are only two not-adopted spinsons (red) in the system and one of them (in a double red circle) is exposed to group pressure; the group members are marked with blue circles. Top row: if the other red spinson resides in the closest neighborhood of the target one, the method shown on the left gives a much higher probability of finding a non-unanimous group of influence than the one on the right. Bottom: if the not-adopted spinson is at the second level, one method yields a higher probability, while for the other the probability is zero.)

To this end, we can consider configurations with the other "down" spinson residing at different levels of the target's ego graph. We start with the "down" spinson being in the nearest neighborhood of the target node. In this case we expect the nn method to give the highest probability of building a group the "down" spinson belongs to. The reason is simple: this is the only method which operates exclusively in the nearest neighborhood of the target spinson. Thus we draw 4 agents out of that neighborhood to form the group, and it is very likely that the "down" spinson will belong to it (see the top left plot of fig. [influence_group trees] for a schematic representation). The probability of finding a non-unanimous group is slightly smaller in the case of nn3, because here only 3 drawings from the closest neighborhood are allowed. The next two methods require only one drawing from the first level (top right plot of fig. [influence_group trees]); hence it is less likely to hit the "down" spinson. Finally, the two remaining algorithms yield the smallest probabilities, because they operate on the whole network rather than in the close neighborhood of the target node. Since the problem at hand is nothing but a variation of an urn problem, we can actually calculate for each method the probability of finding a non-unanimous group of influence. To focus our attention we fix q and the system size so that the ego graph of the target spinson consists of 4 levels. In the case of the nn method the number of all possible q-panels is just the number of 4-combinations selected from the nearest neighborhood of the target; the not-adopted agent at the first level has to belong to each non-unanimous group, and the other three members are selected from the seven "up" spinsons residing at that level. The ratio of these two counts yields the probability of finding a non-unanimous group; the superscript in the last expression indicates the level the other "down" spinson belongs to.
in the method we select three agents from the first level , then pick one out of them and draw one spinson from its neighbors at the next level .this gives possible influence groups .once the `` down '' spinson is chosen , we select two others from the first level , then pick one of the members of the group and add one of its neighbors to the group .the number of all possibilities is in this case given by hence , similar analysis leads to the following results for the other methods : we see that indeed the method gives the highest probability of leaving the target spinson untouched in this case .if the other `` down '' spinson resides at the second level of target s ego graph , the method should give the highest probability of maintaining disorder in the system , because it consists of drawings mostly from that level ( bottom row of fig .[ influence_group trees ] ) .again , we can calculate the corresponding probabilities to get : the method gives indeed the highest probability , followed by and .the probability for is zero , because the method operates only at the first level .it is interesting to note that the probabilities for the second level are much lower than those for the first one . with similar reasoningwe can show that in general the farther the distance between the two `` down '' spinsons , the smaller the chance to maintain disorder , i.e. to let the target spinson unchanged . as a consequence ,only the state of the two closest levels is actually significant for the evolution of target s opinion .for this reason the methods and , followed by and destroy the order in the systems the fastest , i.e. for relative small values of the independence parameter .note that the above conclusion is in accordance with our simulation results shown in figs .[ fig_ba ] , [ fig_ws ] and [ fig_real ] .thus , although a bethe lattice resembles ego graphs of model and real networks on average only , the reasoning remains the same - if the `` down '' agents are sparse , the and methods yield the highest probability to maintain disorder in the system . from our simulationsit follows that in networks with short relative average path lengths the differences between all but and are negligible .a short path length means usually that the ego networks of all agents are rather `` flat '' , i.e. most agents reside at very few levels . in this casealready the second and next levels of an ego graph are highly populated , leading to negligible probabilities of finding non - unanimous groups at the beginning of a simulation for methods operating mainly beyond the first level .hence , on such networks the results for four methods ( , , and ) are essentially indistinguishable .the and methods deviate slightly from others giving a bit smaller critical values of the independence , i.e. destroying the order a bit faster ( see fig .[ fig_ba ] , bottom row of fig .[ fig_ws ] and most of the plots in fig .[ fig_real ] for further reference ) .from physical point of view it is always an interesting question how details at the microscopic scale manifest at the macroscopic level . in the field of opinion dynamicssuch a macroscopic quantity is the opinion , defined in a case of binary models as the magnetization . in this paperwe examine six models that differ only in the way of selecting a group of influence but the size of this group remains fixed .therefore there are no differences between models on the complete graph . 
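The urn-type counting above is simple enough to script. A sketch for the nn rule with the other "down" agent sitting at the first level, assuming a coordination number of 8 (as suggested by the "seven up spinsons" in the text) and q = 4; the analogous hypergeometric counting applies to the remaining rules, and the rand case illustrates why the whole-network rules give the smallest probabilities.

```python
from math import comb

z, q = 8, 4   # assumed coordination number of the Bethe lattice and panel size

# nn rule: a non-unanimous panel must contain the single "down" neighbour
# together with q - 1 of the remaining z - 1 "up" neighbours.
p_nn_level1 = comb(z - 1, q - 1) / comb(z, q)
print(p_nn_level1)    # 0.5 for z = 8, q = 4

# rand rule on a network of N agents (N is an example value): the panel is drawn
# from all other agents, so hitting the single "down" one is far less likely.
N = 1000
p_rand = comb(N - 2, q - 1) / comb(N - 1, q)
print(p_rand)         # equals q / (N - 1), about 0.004 here
```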
for other topologies, methods for which the range of interaction is shorter tend to show a lower critical value of the independence factor, below which the system remains ordered and above which order is destroyed. only two of the methods give exactly the same results on all topologies and agree with the mfa result found in the literature. the remaining four methods give different results, and the differences between methods increase with the relative path length, i.e. they are most visible on regular lattices. with decreasing relative path length the differences between methods vanish. one should notice that the average path length itself does not determine the differences between models, because the results are virtually independent of the system size. what determines the relevant network property is the relative path length, defined as the ratio between the average path length of the considered network and the average path length of a random graph of the same size and average degree. it should be noted that most real-world networks are characterized by relatively short paths, and therefore the differences between models should be negligible. we believe that our results also contribute to the discussion on the differences between inflow and outflow dynamics. as noted by dietrich stauffer: _ the crucial difference of the sznajd model compared with voter or ising models is that information flows outward: a site does not follow what the neighbors tell the site, but instead the site tries to convince the neighbors _. of late, a debate has emerged on whether inflow dynamics differs from outflow dynamics. our findings indicate that it is not the direction of the information flow itself but the range of interactions that matters, which coincides with the results obtained by castellano and pastor-satorras. it is worth noticing that some of the rules investigated here may be viewed as inflow and others as outflow dynamics. in particular, one of the methods corresponds to inflow dynamics. on the other hand, one method was inspired by the two-dimensional and another by the one-dimensional outflow dynamics, and therefore both can be viewed as outflow dynamics. both outflow rules give the same results on scale-free and real twitter networks, whereas the inflow rule gives a lower critical value of the independence. however, it seems that the critical value increases with the relative range of interactions, and it is therefore understandable that the inflow rule gives the lowest value. so perhaps one should think not about the direction (in or out) of the information flow itself but about the range of interactions, which again coincides with the results obtained by castellano and pastor-satorras. summarizing, inflow and outflow dynamics do indeed give different results, but the reason is simply the difference in the range of interactions. as already mentioned in the introduction, the main motivation for this paper was the remark by macy and willer that _ there was little effort to provide analysis of how results differ depending on the model designs _. in the context of the problem posed here, it would seem that the structure of the group of influence may be important from the social point of view. however, as we have shown for many complex networks, including ba and real networks, the importance of the group structure is often negligible.
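for completeness, the relative path length used throughout this discussion is straightforward to compute for any of the topologies studied here. a minimal sketch with networkx; the random-graph baseline is assumed to be an erdos-renyi graph of matched size and mean degree, which may differ from the exact baseline intended in the text:

```python
import networkx as nx

def relative_path_length(G, seed=0):
    """Average shortest path length of the (connected) graph G divided by that
    of a random graph with the same number of nodes and mean degree."""
    n = G.number_of_nodes()
    k_mean = 2.0 * G.number_of_edges() / n
    L_G = nx.average_shortest_path_length(G)
    R = nx.gnp_random_graph(n, k_mean / (n - 1), seed=seed)
    # keep the giant component in case the random baseline is disconnected
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()
    return L_G / nx.average_shortest_path_length(R)

ba = nx.barabasi_albert_graph(1000, 4, seed=1)
ws = nx.watts_strogatz_graph(1000, 8, 0.01, seed=1)
print("barabasi-albert relative path length:", round(relative_path_length(ba), 2))
print("watts-strogatz  relative path length:", round(relative_path_length(ws), 2))
```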
in this paperwe considered only static networks ( not changing in time ) , which is a common approach while studying dynamical processes like opinion spreading or diffusion of innovation .however , the characteristics of many real networks evolve in time and there is more and more data available on temporal networks .if the changes take place at time scales comparable to those of studied processes , the temporal heterogeneities in such networks may lead to big differences in the dynamics of the processes , even if the networks appear similar from the static perspective .thus , it could be worth to check what is the impact of the group structure in models put on top of real temporal networks .this issue will be addressed in one of the forthcoming papers .this work was supported by funds from the national science centre ( ncn ) through grant no .2013/11/b / hs4/01061 .krapivsky , s. redner , e. ben - naim , _ a kinetic view of statistical physics _ , cambridge university press ( 2010 ) p. moretti , s. liu , c. castellano , r. pastor - satorras , j. stat .phys . 151 ( 2013 ) 113 - 130 c. castellano , s. fortunato and v. loreto , rev .81 ( 2009 ) 591 - 646 s. galam , _ sociophysics : a physicist s modeling of psycho - political phenomena _ , new york springer ( 2012 ) p. sen and b. k. chakrabarti , _ sociophysics : an introduction _ , oxford university press ( 2013 ) c.castellano , m.a .muoz , r.pastor-satorras , phys .e 80 ( 2009 ) 041129 p.clifford , a.sudbury , biometrika 60 ( 1973 ) 581 f. slanina , k. sznajd - weron , p. przybyla , epl 82 ( 2008 ) 18006 p. przybyla , k. sznajd - weron , m. tabiszewski , phys .e 84 ( 2011 ) 031117 a. m. timpanaro and c. p. c. do prado , phys .e 89 ( 2014 ) 052808 a. m. timpanaro and s. galam , phys .rev e 92 ( 2015 ) 012807 k. sznajd - weron , k. suszczyski , j. stat .( 2014 ) p07018 k. sznajd - weron , s. krupa , phys .e 74 ( 2006 ) 031109 p. roy , s. biswas and p. sen , phys .e 89 ( 2014 ) 030103 l. behera and f. schweitzer , international journal of modern physics c 14 ( 2003 ) 1331 s. galam , _ local dynamics vs. social mechanisms : a unifying frame _ europhysics letters 70 ( 2005 ) 705 - 711 c. castellano , r. pastor - satorras , phys .e 83 ( 2011 ) 016113 s. galam and a. c. r. martins , europhysics letters 95 ( 2011 ) 48005 m. w. macy and r. willer , annu .sociol . 28 ( 2002 ) 143 - 166 p. nyczka , k. sznajd - weron , j. cislo , phys .e 86 ( 2012 ) 011105 p. nyczka and k. sznajd - weron , j. stat .( 2013 ) 174 - 202 r. albert , a.l .barabsi , rev .74 ( 2002 ) 47 m. newman , siam review 45 ( 2003 ) 167 v. sood , t. antal and s. redner , phys .e 77 ( 2008 ) 041121 p. moretti , s. liu , a. baronchelli and r. pastor - satorras , eur .j. b 85 ( 2012 ) 88 k. sznajd - weron , j. szwabiski , r. weron and t. weron , j. stat .( 2014 ) 3007 d. stauffer , a.o .sousa and s. moss de oliveira , int .c 11 ( 2000 ) 1239 - 1245 p. nail , g. macdonald , and d. levy , psychological bulletin 126 ( 2000 ) 454 - 470 k. sznajd - weron , m. tabiszewski and a. timpanaro , epl 96 ( 2011 ) 48002 s.e .asch , scient amer 193 ( 1955 ) 31 d. j. watts and s. h. strogatz , nature 393 ( 1998 ) 440 a .-barabsi and r. albert , science 286 ( 1999 ) 509 a .-l barabsi , science 325 ( 2009 ) 412 - 413 .doi : 10.1126/science.1173299 d.g .myers , _ social psychology _( 11th ed . ) , 2013 , new york : free press s. goswami , s. biswas and p. sen , physica a 390 ( 2011 ) 972 a. fronczak , p. fronczak , and j. a. hoyst , phys . rev .e 70 ( 2004 ) 056110 q.k .telesford , k.e .joyce , s. 
hayasaka , j.h .burdette and p.j .laurienti , brain connect . 1 ( 2011 ) 367 - 375 a. chmiel , p. klimek and s. thurner , new j. phys .16 ( 2014 ) 115013 j. mcauley and j. leskovec , _ learning to discover social circles in ego networks _ , nips , 2012 m. ostilli , physica a 391 ( 2012 ) 3417 n.l .johnson and s. kotz , _ urn models and their application : an approach to modern discrete probability theory _ , wiley ( 1977 ) d. stauffer , computer physics communications 146 ( 2002 ) 93 - 98 n. eagle and a. pentland , pers ubiquit comput 10 ( 2006 ) 255 r. k. pan and j. saramki , phys .e 84 ( 2011 ) 016105
we propose and compare six different ways of mapping the modified q-voter model to complex networks. considering square lattices, barabasi-albert, watts-strogatz and real twitter networks, we ask whether a particular choice of the group of influence of a fixed size always leads to different behavior at the macroscopic level. using monte carlo simulations we show that the answer depends on the relative average path length of the network and that for real-life topologies the differences between the considered mappings may be negligible. keywords: opinion formation, opinion dynamics, q-voter model, agent-based modelling, social influence, complex networks
it is well known that entanglement is a resource that can be used for a number of tasks , for example , teleporting a quantum state from one system to another .more recently , other quantum properties have been explored as resources .the most recent is coherence .coherence is a basis - dependent property , and it depends on the off - diagonal matrix elements of the density matrix expressed in that basis .the standard example is that of a particle going through an interferometer . in order to see an interference pattern at the output, there has to be coherence between the paths the particle can take inside the interferometer .one way to decrease the coherence between the paths is to gain information about which path the particle took , and doing so decreases the visibility of the interference pattern - . in two different ways of quantifying coherencewere proposed , and we shall make use of one of them . in order to treat a property , such as entanglement or coherence , as a resource , one needs a measure in order to quantify how much of that resource one has . in the case of a pure , bipartite entangled state ,the von neumann entropy of one of the reduced density matrices of the state has proven to be a useful measure . in the case of coherence, one defines a set of incoherent states ( this set is basis - dependent ) , and the coherence of a state can be characterized by its distance from this set . in ,several possible distances were explored , and two with particularly nice properties were singled out .one is based on relative entropy , and the other on the norm of the density matrix . herewe shall use the latter . in the context of coherence as a resource , it is useful to see how the performance of a quantum algorithm that depends on coherence changes as the amount of coherence in the system decreases .one of the first quantum algorithms , the deutsch - jozsa algorithm , depends on quantum coherence for its operation , and it is particularly simple . in fact , it can be rephrased as a particle going through a multi - arm interferometer and looking at the interference pattern at the output .we will use a quantum walk version of the deutsch - josza algorithm to show this .the deutsch - josza algorithm solves a decision problem and does the following .one is given an oracle that evaluates a boolean function , which is promised to be constant or balanced , and ones task is to determine which .we will assume that our boolean function maps -bit strings to either or , and if the input to the oracle is the string , its output is .a constant function is the same on all inputs and a balanced one is on half of the inputs and on the others . in the worst case scenario, one would have to check inputs to be certain which kind of function one had , while in the quantum case only one function evaluation is necessary .if one is willing to accept a probabilistic answer , classically one would only have to check a few inputs in order to determine which type of function the oracle represented with a small probability of making a mistake .consequently , the deutsch - jozsa algorithm is not a practical one , but it does serve to illustrate how quantum mechanics allows one to perform tasks in a different way than would be possible on a classical computer , and gain some quantum advantage . the classical - quantum comparison can be made precise by asking for the probability of obtaining the correct answer , constant or balanced , in a fixed number of runs . 
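that comparison can already be illustrated for the classical side with a few lines of python. the sketch below assumes equal priors for the constant and balanced cases and uniformly random balanced functions; it simply samples k distinct oracle inputs and guesses "balanced" as soon as two outputs differ:

```python
import random

def classical_guess(f_outputs, k, rng):
    """Sample k distinct oracle inputs; guess 'balanced' if two outputs differ,
    otherwise guess 'constant'."""
    sample = rng.sample(f_outputs, k)
    return "balanced" if len(set(sample)) > 1 else "constant"

def success_probability(n_bits=10, k=3, trials=20_000, seed=0):
    rng = random.Random(seed)
    N = 2 ** n_bits
    correct = 0
    for _ in range(trials):
        if rng.random() < 0.5:                      # constant function
            truth, outputs = "constant", [rng.randrange(2)] * N
        else:                                       # balanced function
            truth = "balanced"
            outputs = [0] * (N // 2) + [1] * (N // 2)
            rng.shuffle(outputs)
        correct += classical_guess(outputs, k, rng) == truth
    return correct / trials

for k in (1, 2, 3, 5):
    print(f"k = {k} classical queries -> P(correct) ~ {success_probability(k=k):.3f}")
```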
herewe wish to examine the effect of decoherence on the performance of the deutsch - jozsa algorithm , and a variant of it , using a recently defined measure of coherence .the deuthsch - jozsa algorithm depends on quantum coherence , and the less of it there is , the worse the algorithm will perform .we wish to make this statement quantitative using one of the measures for coherence proposed in and several different measures for the performance of the algorithm .we will see how the amount of coherence affects our ability to distinguish the balanced and constant cases for a fixed number of measurements and compare this to the result of a classical procedure .we will then examine modified decision problem , deciding between a balanced function and one that is biased , i.e. , where is known .we will use a quantum walk version of the deutsch - jozsa algorithm .the reason for doing so is that this version of the algorithm shows that the deutsch - jozsa algorithm is analogous to sending a particle through an interferometer that has a large number of paths .this use of the interferometer makes it clear that the quantum resource we are using is just quantum coherence .the graph on which the walk takes place is shown in figure 1 .the tails on the graph are semi - infinite , with the right - hand tail having vertices , , , and the left - hand tail having vertices , , , and so on .and are fourier vertices , and all other vertices simply transmit the particle .there are paths going from vertex to vertex .the rectangles are phase shifters and the one multiplies the state by .the tails , one starting at the vertex and the other starting at the vertex , are semi - infinite.,scaledwidth=50.0% ] we will be using a scattering walk , which is a discrete - time quantum walk . in this type of walk ,the particle sits on the edges , not the vertices , and each edge has two orthogonal states , each corresponding to the particle moving in a particular direction .for example , the edge between and has the states corresponding to the particle being on that edge and moving from to , and the state corresponding to the particle being on that edge and moving from to .to each vertex of the graph corresponds a unitary operator that transforms states entering the vertex into states leaving the vertex .the unitary operator that advances the walk one time step is composed of the combined actions of the unitary operators at the individual vertices . the vertices and are fourier transform vertices and the unitary operators corresponding to them , and respectively , act as the other vertices just transmit the particle , but those with a phase shifter also add a phase factor to the transmitted state , in our case the phases , , will be either or , andthese phases correspond to the output of the boolean function in the deutsch - jozsa algorithm .the phases are promised either to be all the same ( constant ) or half of them are and half are ( balanced ) . our task is to find out which of the two cases we have .we will start the particle in the state , run the walk for 3 steps , and then see whether or not it is in the state . if we find the particle in that state we will conclude the phases were all the same , andif we do not , we will conclude we had the balanced situation . in order to compare the quantum walk result to a classical one , we will assume that classically we are able to sample the phase shifters , i.e. 
pick some of them and see how they are set , whether to or .then a classical versus quantum comparison will consist of a comparison between the number of phase shifters we sample versus the number of times we have to run the quantum walk .we start the particle making the walk in the state . after two steps , its state is .\ ] ] one more step yields the state the last term is the one that interests us , because it yields the probability that the particle is on the edge between and . if all of the phases , are the same , this probability is just , and if half the phases are and half , then , assuming is even , it is zero .therefore , with a small error of order , which we shall assume we can neglect , we can determine which of these two possibilities we have by measuring the walk after three steps to see whether the particle is between and or not .now we want to introduce decoherence into this system .one way of doing so is to introduce a qubit for each leg of the graph . in particular , let us suppose that all of these qubits are initially in the state .when the particle goes through the vertex , in addition to picking up the phase , the qubit goes from the state to the state , which is a linear combination of the states and , .if we let for and , then the state after two steps is .\ ] ] the reduced density matrix corresponding to this state , where we trace out the ancilla qubits , is given by one of the measures of coherence defined in for a general density matrix , , on an -dimensional space is if we define then we see that note that in the case in which all of the qubit states are the same , then the inner products are independent of and for the case .setting , we then find that . if we now let the walk go one more step , the state is forming a density matrix from this state and tracing out the ancillas gives us the output density matrix for the particle making the walk , , and the probability of finding the particle on the edge between and is note that what this tells is is that the amount of coherence in the system places an upper limit on our ability distinguish the constant and balanced cases . with perfect coherencethe particle always ( up to ) finishes in the state and in the balanced case it never does . when the amount of coherence decreases , the the probability that the constant case will be mistaken for the balanced case increases .this shows that the quantum resource that is being used to accomplish this task is coherence .now let us see what happens to the results of the deutsch - jozsa algorithm as the amount of coherence in the system is decreased in more detail . in particular , we will examine the probability of correctly identifying whether the interferometer is constant or balanced in a fixed number of runs .we shall look at the case that is independent of and for and we shall assume the inner product is real and positive so we can set . 
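the output probability just written down can be evaluated directly. a short numpy sketch, assuming all pairwise ancilla overlaps equal a single real parameter nu, the 1/(n+1) amplitude normalisation used in the text, and an illustrative number of paths:

```python
import numpy as np

def p_output(phases, nu):
    """Probability of finding the walker on the final edge after three steps,
    assuming path amplitudes e^{i*phase}/(N+1) and a common real overlap nu
    between the ancilla states attached to distinct paths."""
    N = len(phases)
    amp = np.exp(1j * np.asarray(phases, dtype=float))
    overlaps = np.full((N, N), nu) + (1.0 - nu) * np.eye(N)   # <chi_j|chi_k>
    total = np.einsum("j,k,jk->", amp, np.conj(amp), overlaps)
    return float(np.real(total)) / (N + 1) ** 2

N = 64                                        # illustrative number of paths
constant = np.zeros(N)                        # all phases equal
balanced = np.array([0.0, np.pi] * (N // 2))  # half 0, half pi
for nu in (1.0, 0.9, 0.5):
    print(f"nu = {nu}: constant -> {p_output(constant, nu):.3f}, "
          f"balanced -> {p_output(balanced, nu):.3f}")
```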
if all of the are the same , then \nonumber \\ & \simeq & \nu + o(1/n ) .\end{aligned}\ ] ] if half of the are and half are , then we have that \nonumber \\ & = & \frac{(1-\nu ) n}{(n+1)^{2 } } \nonumber \\ & = & o(1/n ) .\end{aligned}\ ] ] our procedure is to run the walk and measure whether the particle is in the state .if it is , we guess that we have the constant case , and if not , we guess we have the balanced case .we see that as the amount of coherence decreases , our chance of making an error increases .note that the error is almost one - sided .if the particle comes out , we know with very high probability that all of the are the same . however , if it does not come out , and we guess that the are in the balanced configuration , then , assuming the balanced and constant cases are equally likely , we have a chance of of being wrong .classically , looking at one of the phase shifters gives us no information about which of the two cases we have , so for one trial , the quantum case does better . clearly , coherence is a resource in the quantum case , because the more coherence there is in the system , the less likely we are to make a mistake .now let s see what happens with two trials .let s look at the classical case first .we shall call the results of the trials and , where . herewe are denoting the phase shifters by rather than , so a phase of corresponds to and a phase of corresponds to .there are four possible results , , if we sample two of the phase shifters , , , , and .we will assume that the balanced and constant cases are equally likely , and that within the constant category , each value is equally likely . if the results are different we know we have the balanced case .this happens with a probability of ( the probability of the balanced case occurring times the probability of the results being different ) .if we get the same result for each trial , things get a bit more complicated .we want to find , the probability that we have the constant case given that we have the result , and similarly , the probability that we have the balanced case. clearly . to find the probabilities when the results are the same , we use bayes theorem .let us find , where we have specifically indicated which constant value the phase shifter will have .we then have now and . for the denominator we have finally , this gives us that , which implies that .similarly , .our strategy , then , is to guess balanced if the results are different , and constant , if they are the same .our probability of being correct is , i.e. we are always correct if the results are different and are correct with a probability when they are the same .now let us look at the quantum case .we run the walk twice , and we denote the results of the runs by , , and , where denotes we did not find the particle in the state and indicates that we did . neglecting terms of , we have that and and . now making use of bayes theorem we have that in the cases , and , we can conclude that we have the constant case with certainty .if we obtain , then we have now the first of these probabilities is less than or equal to the second , so if our measurement results are , we should always guess balanced .in all other cases we guess constant .doing so , our probability of being wrong is .the quantum error probability will be less than the classical one when or .so in the case of two trials , as long as the amount of decoherence is not too great , the quantum method is better .this can be generalized to trials for . 
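as a numerical preview of that generalisation (worked out carefully below), the two error probabilities can be compared directly. the sketch assumes equal priors, a detection probability of roughly nu per run in the constant case, and, up to o(1/n), no detection in the balanced case:

```python
def classical_error(m):
    """P(wrongly guessing 'constant') when all m sampled phase shifters happen
    to agree although the setting is balanced; equals 1/2^m for equal priors."""
    return 0.5 ** m

def quantum_error(m, nu):
    """P(wrongly guessing 'balanced') after m runs with no detection, assuming
    a detection probability of about nu per run in the constant case and
    negligible detection probability in the balanced case."""
    return 0.5 * (1.0 - nu) ** m

for m in (1, 2, 4, 8):
    quantum = ", ".join(f"nu={nu}: {quantum_error(m, nu):.4f}" for nu in (1.0, 0.9, 0.6))
    print(f"m = {m}: classical {classical_error(m):.4f} | quantum {quantum}")
```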
before doing so ,let us be more careful about specifying our ensemble .we are assuming that each of the constant cases occurs with probability , and that the total probability of the balanced case is .within the balanced case , each of the balanced sequences has the same probability .so far , we have assumed that , given the balanced case , this is equivalent to the probability that a particular phase shifter has is , the probability that it has is , and that different phase shifters can be treated as independent .this needs to be justified , and this is done in the appendix .we find that as long as , this assumption is valid . in the classical case ,the only ambiguous situation is if all of the examined phase shifters are found to be the same. we would then guess that we are in the constant situation . in the quantum case ,the only ambiguous case is if the particle is never found between and . we would then guess that we are in the balanced situation .let us have a look at these cases and see what the probability of making a mistake is .in all other situations , the probability of making a mistake is very small .we start with the classical case .denote the probability that we have given that we examined phase shifters and found them to be by . making use of bayes theorem and find that the result for the probability for when we found phase shifters to be is the same . since we will guess the constant case in both these situations , the probability of being wrong is (m1 ) \nonumber \\ & = & \frac{2}{1 + 2^{m-1}}\left ( \frac{1}{4}+\frac{1}{2^{m+1 } } \right ) = \frac{1}{2^{m}}.\end{aligned}\ ] ] now we move to the quantum case , and now denotes the probability that we have the constant case given that the particle was not found in the state in trials. now application of bayes theorem and the fact that gives us now in this case we will guess balanced , so the probability of being wrong is if , then we will have .this tells us how much coherence we need for the quantum method to outperform the classical one .the decision problem we looked at in the previous section was one in which both the quantum algorithm and the classical one had ( almost ) one - sided error . in the classical case , if the interferometer is constant , we will never guess balanced , and in the quantum case , if it is balanced , we will never guess constant . herewe would like to look at a situation in which the quantum algorithm has one - sided error , but the classical one does not .this can give the quantum algorithm a significant advantage if the errors have different costs .we again look at the case where the phase shifts are either or , but we now want to distinguish between the case in which the phase shifts are balanced and the case in which where we assume that . in order to distinguish between these alternatives , our strategies are the same as before . the quantum strategy is to run a quantum walk a certain number of times , and the classical strategy is to sample the phase shifters . in this case, the quantum strategy is the easier one to analyze .let us first consider the situation without decoherence .we know that in the balanced case , the probability of measuring the particle to be in the state after the walk is , up to , zero . in the second case , which we shall refer to as the case ,the probability to find the particle in that state is . 
in that case , if the walk is run times , the probability of not finding the particle between and is where we have made use of the fact that .therefore , in order to detect this case , that is to find the particle at least once in the state , we need to be at least of order . our strategy is to assume that if we ever find the particle in the state in runs that we have the case , and that we have the balanced case otherwise . if we are given the balanced case , we will always be correct , and if we are given the case and is of order or greater , our probability of error will also be small .if there is decoherence , the effect is simply to replace by , so that as long as is not too small , the effect of decoherence will not be large now we turn to the classical case .we will look at of the phase shifters .let each sampled phase shifter be represented by a variable , where corresponds to and corresponds to .we define if we find we shall assume that we have the case , otherwise we will assume we have the balanced case .therefore , we want to find the probability of making an error .let us start by assuming that we have the balanced case , and we would like to find the probability that we would identify it as the case .we will assume that all of the balanced sequences of phase shifters are equally probable .if we are only sampling of the phase shifters , this is equivalent to assuming that each phase shifter we look at has an equal chance of having and ( see appendix ) .we now want to find the probability that . for this purpose we can use the chernoff bound .it states that if we have the independent random variables , , , where can be either or , and its probability of being is , then for , , and any , then ^{\mu } .\ ] ] in our case , and setting , we find that implies so that ^{m/2 } .\ ] ] assuming and keeping lowest order terms in , we find \simeq -\epsilon^{2}/4 , \ ] ] so that the right - hand side of eq .( [ errorprob ] ) is approximately .similarly , let us suppose that we have the case .we will assume that all sequences of phase shifters satisfying are equally likely .for a subsequence of length , where , this is equivalent to assuming that each element has a probability of to be and to be ( see appendix ) .we now want to find the probability that we would identify this as the balanced case , which is the same as finding .we can now use the following version of the chernoff bound . with the same conditions as before , we now have that and implies , which further implies that finally , keeping only lowest order terms in , we find that summarizing , we see the following . 
for both the quantum and classical methods ,the condition for keeping the error small is the same , should be at least of order .however , up to , the quantum error is one - sided , if we have the balanced case , we will not mistake it for the case .for the classical method , the error is two - sided , we can mistake each case for the other .therefore , if we are in a situation in which the cost of mistaking the balanced case for the case is large , the quantum method has an advantage .note that for this situation , deciding between the balanced and cases , the type of decoherence we are considering does not affect the fact that the quantum error is one sided , but it will cause the number of runs that we need to make in the quantum case , which is of order , to increase .the reason it does not affect the one - sidedness of the error , is that the decoherence respects the symmetry of the problem ; it is the same for each branch of the interferometer .this suggests that for some problems for which coherence is a resource , not only its total quantity , but its properties will play a role .we have examined the role played by coherence as a resource in the deutsch - jozsa and related algorithms .the deutsch - jozsa algorithm is a means of solving a decision problem , in particular , deciding between two alternatives . in its ideal form, it provides an answer in a single run , whereas classically in the worst case an exponential number of runs would be necessary .decoherence degrades the ability of the algorithm to decide between the alternatives , and the smaller the amount of coherence in the system , the worse the ability of the algorithm to distinguish between the two cases .this demonstrates that coherence is a resource for this algorithm .we also looked at the deutsch - jozsa algorithm in a probabilistic setting , and found that as long as there is enough coherence present , there is a quantum advantage in that for a fixed number of measurements , one has a higher probability of making the correct decision using quantum means than by using classical ones . by looking at a related decision problem, we found an example in which the number of measurements one makes is comparable for the classical and quantum cases , at least if the coherence in the quantum case remains high enough , but while the classical procedure has two - sided error , the quantum procedure has one sided error .this research was supported by a grant from the john templeton foundation .i would like to thank seth cottrell , emilio bagan , and janos bergou for useful conversations .we now need to justify what we did in sections iii and iv . in the our ensemble in section iii ,each balanced sequence of length occurred with equal probability .a related ensemble occurred in section iv .we want to show the following .we consider an ensemble of sequences of length consisting of , in which each sequence has ones and minus ones .each of these sequences has the same probability .we now consider fixed subsequences of these sequences of length , e.g. the first elements of each sequence of length .we want to show that the probability of a subsequence with ones and minus ones , where , is the same as if each location in the subsequence has a probability of containing a one and a probability of containing a minus one . for convenience , we will consider subsequences consisting of the first places of the sequences of length . 
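before going through the derivation, the claim can be checked numerically with scipy; the sizes below are illustrative, the argument only requiring that the prefix be much shorter than the full sequence:

```python
import numpy as np
from scipy.stats import binom, hypergeom

N, m, p = 2000, 10, 0.5            # illustrative sizes; the argument needs m << N
ones_total = int(p * N)            # +1's in the full fixed-composition sequence

k = np.arange(m + 1)
exact = hypergeom.pmf(k, N, ones_total, m)   # prefix of the fixed-composition sequence
approx = binom.pmf(k, m, p)                  # independent draws with probability p
print("max |exact - approx| =", float(np.max(np.abs(exact - approx))))
```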
the probability , that the subsequence has ones is where now we shall assume that is much less than , , and and apply the stirling approximation , .we then have we can approximate the last factor by taking its logarithm and expanding in this gives us applying this relation to and !/[(1-p)n]!$ ] and substituting into the expression for , we obtain this gives us which is what we would obtain if we assumed that in the sequence of length one occurred with probability and minus one occurred with probability .99 t. baumgratz , m. cramer , and m. b. plenio , phys .lett . * 113 * , 140401 ( 2014 ) .d. m. greenberger and a. yasin , phys .a * 128 * , 391 ( 1988 ) .g. jaeger , a. shimony , and l. vaidman , phys .a * 51 * , 54 ( 1995 ) .g. englert , phys .lett . * 77 * , 2154 ( 1996 ) .s. drr , phys .a * 64 * , 042113 ( 2001 ) .b- . g. englert and j. bergou , opt .commun . * 179 * , 337 ( 2000 ) .m. jakob and j. a. bergou , phys . rev .a * 76 * , 052107 ( 2007 ) .m. n. bera , t. qureshi , m. a. siddiqui , and a. k. pati , phys .a * 92 * , 012118 ( 2015 ) .d. deutsch and r. jozsa , proc .royal doc .london a * 439 * , 553 ( 1992 ) .m. hillery , j. bergou , and e. feldman , phys .a * 68 * , 032314 ( 2003 ) .r. motwani and p. raghavan , _ randomized algorithms _ , ( cambridge university presss , cambridge , 1995 ) .
that superpositions of states can be useful for performing tasks in quantum systems has been known since the early days of quantum information, but only recently has a quantitative theory of quantum coherence been proposed. here we apply that theory to an analysis of the deutsch-jozsa algorithm, which depends on quantum coherence for its operation. the deutsch-jozsa algorithm solves a decision problem, and we focus on a probabilistic version of that problem, comparing the probability of being correct for the classical and quantum procedures. in addition, we study a related decision problem in which the quantum procedure has one-sided error while the classical procedure has two-sided error. the role of coherence in the quantum success probabilities for both of these problems is examined.
in superconducting magnets , the design of an adequate quench protection system based on an internal dump of the magnet stored energy depends mainly on the growth rate of the resistance that builds up in the magnet following a quench . to determine the resistance growth rate, one usually turns to study the initiation and dynamics of the normal zone , the region within which the superconductor exhibits normal behavior , in the conductor . the normal zone propagation in adiabatic conditionscan be described by two parameters , the longitudinal velocity , the rate with which the normal zone expands along the axis of the conductor , and the transverse velocity . is much easier to measure in an experiment and a computation of it can be done either numerically or by utilizing some of the analytical formulae available in the literature ( see , for example , ) .moreover , can be linearly approximated by where and are the transverse and longitudinal thermal conductivities of the superconductor , respectively .hence , a satisfying description of the normal zone propagation can be obtained by knowing . in this work, we present a novel numerical calculation of in nbti / cu rutherford cables surrounded by a normal metal cladding .such conductors are used in many existing detector magnets , such as the famous atlas and cms magnets at cern , and will be utilized by the future iaxo experiment . the numerical calculationexploit the commercial fea software comsol to simulate the propagation of a normal zone in a two dimensional adiabatic conductor .the model accounts for both the current sharing process between the superconductor and the stabilizer , as well as for the heat propagation over time and space along the conductor . hence , it yields the influence of both the temperature and the magnetic field on .in addition , we study the influence of the thickness of the cladding on for varying magnetic field and operating current .this allows us to present a good estimation of the longitudinal normal zone propagation velocity for a very broad variety of highly stabilized superconductors in many existing and also , more importantly , for future magnets . to complete our analysis, we introduce an analytical formula to calculate , following a previous idea by mints et al .this formula allows one to approximate by taking into account the thermal diffusion as well as the current redistribution in the conductor .we apply the formula in the same scenarios as in the numerical model to aid us in analyzing and interpreting the results of the numerical calculation and also to give a further justification to the numerical results when experimental data is unavailable .in the core of the numerical computation is a solution to the heat diffusion equation that takes into account the current redistribution process in the conductor .the heat balance equation for a unit volume of conductor is given by where is the mass density , is the specific heat , is the thermal conductivity , is the electrical resistivity , is the current density and represents the external energy disturbance . as the temperature in the conductor rises above the current - sharing temperature , the current starts diffusing from the superconductor into the normal metal , thereby continuously changing the heat generation term in eq .( [ heatbalance ] ) . to include this effect in the heat generation term, one simply introduces ampere s law into the heat balance equation . 
assuming the electromagnetic behavior of the conductor is similar to that of a set of infinite plates ( see sec .[ modgeo ] ) , we regard the current carrying cable as an infinite current carrying sheet . thus , the displacement current is omitted from the maxwell - ampere equation and one can easily obtain an equation to describe the magnetic diffusion from the coupled set of eqs .[ heatbalance ] ( utilizing ampere s law ) and [ difm ] , a full description of the temperature and magnetic field distributions in a current carrying object can be obtained .as we seek for the behavior of as a function of and , we must account for the dependence of the material properties of both these parameters . to relax the computational cost , we assume the rutherford cable is made of a single , homogenous , material and average and over the cross - section of the cable .the effective electrical density is obtained by viewing the superconductor - copper system as two resistors connected in parallel .the material properties of the cladding are those of the normal metal comprising it .the effective resistivity of the conductor is obtained by treating the cladding and rutherford cable as two resistors connected in parallel .the material properties were obtained from the matpro library and fitted to a polynomial function .the different fits cover a magnetic field range of 0 - 5 t for a temperature range of 0 - 300 k. the residual resistivity ratio ( rrr ) we chose for the aluminum in the cladding and the copper are 1500 and 100 , respectively .the minimal dimensionality of quench study models depends on the ratio between the time scales of the thermal and magnetic diffusions .the thermal diffusion time scale is given by , where is the length of the region in which the quench - driving heat release occurs , is the propagation velocity and is the thermal diffusivity of the cladding , taken at .the time scale of the transverse magnetic diffusion is defined as , where is the thickness of the cladding and the index refers to cladding metal . as depends implicitly on the current through the velocity , for high currents is generally finite with respect to .thus , a sufficient understanding of the problem requires a 2d model .notice , however , that a solution to this problem can also be obtained by coupling a 2d magnetic model to a 1d thermal model . the 2d model of the conductoris shown in fig .[ geometria ] .we assume that the length scale in the direction is infinitely larger than any other length scale in the problem and the reduction to an equivalent 2d problem is thus done by considering a slice of the conductor on the plane .the solution is restricted to the and region . in the direction, we refer to the thickness of the cable as and to that of the cladding as .the mesh of the geometry uses rectangular 2d elements and is shown in fig .[ comsolmesh ] .the elements are constructed so that they are finer in the vicinity of the boundary between the cable and the cladding , at . in the directionthe elements are getting coarser with increasing , so that near the origin , where the initial perturbation takes place and the temperature gradient is high , the mesh is considerably finer .the model consists of 11000 domain elements and 2022 boundary elements for m. 
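the homogenisation step described above can be written compactly. a minimal sketch, using the parallel-resistor rule for the electrical resistivity; the property values are rough placeholders, not the matpro fits used in the model:

```python
import numpy as np

def parallel_resistivity(rhos, areas):
    """Effective resistivity of materials carrying current side by side
    (resistors in parallel, weighted by cross-sectional area)."""
    rhos, areas = np.asarray(rhos, float), np.asarray(areas, float)
    return areas.sum() / np.sum(areas / rhos)

# placeholder low-temperature resistivities (ohm*m), not the MATPRO fits
rho_nbti_normal, rho_cu_rrr100, rho_al_rrr1500 = 6.0e-7, 1.7e-10, 1.8e-11
A_sc, A_cu, A_clad = 2.0e-5, 3.0e-5, 6.0e-4        # m^2, illustrative

rho_cable = parallel_resistivity([rho_nbti_normal, rho_cu_rrr100], [A_sc, A_cu])
rho_total = parallel_resistivity([rho_cable, rho_al_rrr1500], [A_sc + A_cu, A_clad])
print(f"effective cable resistivity     : {rho_cable:.3e} ohm*m")
print(f"effective conductor resistivity : {rho_total:.3e} ohm*m")
```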
an immediate consequence of the 2d description of the problem is that the conductor can be approximated by a set of infinite plates , where the cable is seen as an infinite current carrying sheet with initial total current .thus , the magnetic field in the conductor prior to the external energy release has the form where is the engineer current density an is an external magnetic field representing the contribution from other current sources in the coil .on the interface between the composite material and the cladding we demand the continuity of the magnetic field and its flux . on the external boundaries ,the magnetic field is defined as and .the model assumes full adiabaticity .the initial value condition for the temperature reads in the bulk of the conductor .the temperature and its flux are also assumed to be continuous along , so that , and that , where the indices and correspond to the composite material region and the cladding region , respectively , and is the unit vector orthogonal to the boundary . the external energy input can be computes in several ways .we chose to represent the energy pulse by a guassian shape in space and exponential decay in time .the disturbance is given by a power density function where with mm being the full width at half maximum of the gaussian , and = 0.005 sec . from , where is a 1d energy density, we get w / m .thus , the initial quench energy can be estimated by multiplying by an appropriate characteristic length , such as the coil width .next , it is worth writing an analytical formula to describe the longitudinal normal zone propagation .although due to the strong coupling between the heat and magnetic diffusion equations an exact solution is practically impossible , a good approximation can be obtained in a simple manner . a well known technique to deal with the heat diffusion equation in a 1d adiabatic system can be found in .although current redistribution is not taken into account , this technique can provide a good first approximation for low currents , where the current redistribution can be regarded as instantaneous . in this case , a solution to the heat diffusion equation yields the following longitudinal propagation velocity , where is the total cross - section of the conductor and the material properties are taken as an average across .when the operating current is high , the assumption that the current is immediately redistributed into the cladding is no longer valid . to distinguish between the high and low current regimes we look at the ratio between the characteristic times associated with the normal zone propagation and the magnetic diffusion we define the low current regime , where the current can be regarded as immediately redistributed into the cladding , for .similarly , when current redistribution must be explicitly taken into account .then , in the vicinity of the transition front the current remains confined to a certain small fraction of the cladding around the superconductor .this leads to a non - uniform quench - driving heat release , which accelerates the propagation velocity .the transition takes place at around 10 ka for most highly stabilized cables at their operating points .a simple way to approximate this scenario is by considering the joule heating term as resulting from a uniform current flowing solely in a confined area within the conductor .this ansatz introduces the effect of current redistribution into the propagation velocity by solving only the heat balance equation . 
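both the low-current (wilson-type) estimate quoted above and the confined-area ansatz just introduced can be sketched numerically. the interpolation used below for the effective current-carrying area is a hypothetical stand-in (a simple exponential in the time-scale ratio), not the expression proposed in the text, the effective resistivity is kept fixed for simplicity, and all material numbers are placeholders rather than the b0/b00 values:

```python
import numpy as np

def nzp_velocity(I, A_current, gamma_C, rho, k, T_s, T_op):
    """Adiabatic propagation-velocity estimate of the Wilson type,
    v = J/(gamma*C) * sqrt(rho*k/(T_s - T_op)),
    with J averaged over the area A_current that actually carries the current."""
    J = I / A_current
    return J / gamma_C * np.sqrt(rho * k / (T_s - T_op))

def effective_area(A_cable, A_clad, tau):
    """Hypothetical interpolation between the confined-current limit (tau -> 0)
    and full redistribution into the cladding (tau -> infinity)."""
    return A_cable + A_clad * (1.0 - np.exp(-tau))

# illustrative numbers only, not the B0/B00 conductor values
I, gamma_C, rho, k = 20e3, 1.0e3, 3.0e-11, 300.0   # A, J/(m^3 K), ohm*m, W/(m K)
T_s, T_op = 6.5, 4.5                               # current-sharing / operating T, K
A_cable, A_clad = 5.0e-5, 6.0e-4                   # m^2

v_low = nzp_velocity(I, A_cable + A_clad, gamma_C, rho, k, T_s, T_op)
print(f"low-current (fully redistributed) estimate: {v_low:.1f} m/s")
for tau in (0.1, 1.0, 10.0):
    A_eff = effective_area(A_cable, A_clad, tau)
    v = nzp_velocity(I, A_eff, gamma_C, rho, k, T_s, T_op)
    print(f"tau = {tau:4.1f}: A_eff = {A_eff:.2e} m^2, v ~ {v:.1f} m/s")
```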
to find an expression for may study its asymptotic behavior .the cross - section area of the region where the current flows is determined by .when is large the current is practically instantaneously redistributed in the conductor and we expect that . on the other hand , for small current penetrates only a thin layer of the cladding around the cable .therefore , . in a similar manner to boxman , we suggest the following expression for the effective area which carries the current ~.\ ] ] when considering the joule heating to be generated only within , the expression that describes the normal zone propagation velocity changes accordingly and takes the following form where is the effective electrical resistivity calculated from the material properties in the latter are taken at .the results of the analytical approximation and the numerical comsol model are presented along with two sets of measurements data .the measurements were done on two highly stabilized conductors , the b0 and b00 coils of the atlas magnet test facility at cern , for different operating currents .this data , however , is not sufficient to fully verify our numerical and analytical models , as we are also interested in the behavior of the velocity for different cladding thicknesses . nonetheless , the comparison between the models and the data gives a good indication that the model provides satisfying results .a measurement of the propagation velocity for different operating currents is a straightforward way to gain insight on the normal zone behavior of the conductor . in fig .[ data ] , a comparison between the comsol simulation , the measurements data , the analytical formula and wilson s approximation are presented .the velocity increases exponentially with the current and shows an asymptotic behavior as . the close agreement between the analytical and numerical models and the measurements of the propagation velocitycan be appreciated from the graphs .another fact evident from the graphs is the breakdown of wilson s solution for high currents , where current redistribution is becoming significant .our analytical approximation provides a better match to the data for a wide range of operating currents .one can notice how both wilson s approximation and our analytical formula converge at low currents , where .the simulation results are generally higher than the measurement data due to the adiabatic boundary conditions of the numerical model , which assume the conductor to be a closed system .the behavior of the propagation velocity with respect to the thickness of the stabilizer is less obvious than the vs. behavior .[ resultados ] shows a series of plots , where the propagation velocity is plotted as a function of the stabilizer thickness for different operating currents and magnetic fields . in each plot , the numerical results , the analytical approximation and the wilson solution are shown .although no measurements on the behavior of the propagation velocity with respect to the geometry of the stabilizer are shown in the plots , we do expect , based on fig .[ data ] , that the results generally represent a correct behavior .some insight can be gained by examining fig .[ resultados ] . for low currents ,there is a very good agreement between our analytical and numerical models . 
for higher currents ,this agreement breaks and some deviations between the two models appear .the general behavior of the plots can be explained , however , by examining the analytical formula , eq .( [ vmb ] ) .this general behavior is in fact similar to both models .the dynamics of the propagation velocity dependence on has three characteristics . 1 .first , for the dominant term is the current density , see eq .( [ vmb ] ) . as increases from zero , becomes smaller and leads to a small decrease in propagation velocity .second , when , and for small enough values of , the ongoing decrease in current density is compensated by a change in material properties , as the average values of the different material properties are more influenced by the presence of the al stabilizer . because of the large specific heat and thermal conductivity , and small resistivity , the cladding metal acts as a heat sink , therefore increasing the velocity .3 . third , for large values of , the cladding matrix becomes the dominant element in the conductor and practically determines the average material properties of the conductor .hence , the material properties have practically a constant value and the al stabilizer has fulfilled its potential for acting as a heat sink .then , the current density , that goes down with , becomes once more the dominant factor and thus reduces again the velocity .the behavior of longitudinal normal zone propagation velocity was analyzed with respect to the thickness of the metal cladding of the rutherford cable for a wide range of currents and magnetic fields .this provides a good estimation of the normal zone propagation for a variety of superconducting magnets .the results and the physics behind them were explained and analyzed . we have shown that for the current remains confined to a small area around the composite material , limiting the cladding s contribution to the normal zone propagation .since the cladding is thick enough to act as a heat sink , the heat generated in the composite material quickly propagates into the cladding .this , in turn , accelerates the normal zone propagation because the heat generation that contributes to the normal zone propagation is formed almost exclusively within the rutherford cable , which carries almost all of the current .although we did not address the issue of minimum quench energy ( mqe ) , we intend to do so in the future .our results are not effected by this as the propagation velocity is constant for any initial energy release .in addition , our calculation can be expanded to a 3d and thus include the transverse velocities as well .e. w. boxman , m. pellegatta , a. v. dudarev , and h. h. j. ten kate , current diffusion and normal zone propagation inside the aluminum stabilized superconductor of atlas model coil , _ ieee trans . applied superconductivity _13 , p. 1685( 2003 ) .a. foussat , n. dolgetta , a. dudarev , c. mayri , p. miele , z. sun , h. h. j. ten kate , and g. volpini , mechanical characteristics of the atlas b0 model coil , _ ieee trans . applied superconductivity _13 , p. 1246
the stability of high - current superconductors is challenging in the design of superconducting magnets . when the stability requirements are fulfilled , the protection against a quench must still be considered . a main factor in the design of quench protection systems is the resistance growth rate in the magnet following a quench . the usual method for determining the resistance growth in impregnated coils is to calculate the longitudinal velocity with which the normal zone propagates in the conductor along the coil windings . here , we present a two dimensional numerical model for predicting the normal zone propagation velocity in aluminum stabilized rutherford nbti cables with large cross section . such conductors comprise a superconducting cable surrounded by a relatively thick normal metal cladding . by solving two coupled differential equations under adiabatic conditions , the model takes into account the thermal diffusion and the current redistribution process following a quench . both the temperature and magnetic field dependencies of the superconductor and the metal cladding materials properties are included . unlike common normal zone propagation analyses , we study the influence of the thickness of the cladding on the propagation velocity for varying operating current and magnetic field . to assist in the comprehension of the numerical results , we also introduce an analytical formula for the longitudinal normal zone propagation . the analysis distinguishes between low - current and high - current regimes of normal zone propagation , depending on the ratio between the characteristic times of thermal and magnetic diffusion . we show that above a certain thickness , the cladding acts as a heat sink with a limited contribution to the acceleration of the propagation velocity with respect to the cladding geometry . both numerical and analytical results show good agreement with experimental data .
breast cancer is still one of the most common forms of cancer among women , despite a significant decrease has occurred in the breast cancer mortality in the last few decades .mammography is widely recognized as the most reliable technique for early detection of this pathology . however , characterizing the massive lesion malignancy by means exclusively of a visual analysis of the mammogram is an extremely difficult task and a high number of unnecessary biopsies are actually performed in the routine clinical activity .computerized methods have recently shown a great potential in assisting radiologists in the malignant vs. benign decision , by providing them with a second opinion about the visual diagnosis of the lesion .the computer - aided diagnosis ( cadi ) system we present is based on a three - stage algorithm : 1 ) a segmentation technique identifies the contour of the massive lesion on the mammogram ; 2 ) several features , based on size and shape of the lesion , are computed ; 3 ) a neural classifier analyzes the features and outputs a likelihood of malignancy for that lesion .the segmentation method is a gradient - based one : it is able to identify the mass boundaries inside a physician - located region of interest ( roi ) image .the algorithm is based on the maximization of the local variance along several radial lines connecting the approximate mass center to the roi boundary .the critical points maximizing the local variance on each radial line are interpolated , thus a rough mass shape is identified .the procedure is iterated for each point inside the approximate mass , resulting in a more accurate identification of the mass boundary .the main advantage of this segmentation technique is that no free parameters have to be fitted on the dataset to be analyzed , thus it can in principle be directly applied to datasets acquired in different conditions without any ad - hoc modification .sixteen features are computed for each segmented mass , some of them being more sensitive to the shape and some to the texture of the lesion .they are : area , perimeter , circularity , mean and standard deviation of the normalized radial length , radial distance entropy , zero crossing , maximum and minimum axes , mean and standard deviation of the variation ratio , convexity ; mean , standard deviation , skewness and kurtosis of the grey - level distribution .the features are analyzed by a multi - layered feed - forward neural network trained with the error back - propagation algorithm .the classifier performances are evaluated according to the 5 cross validation method .in this work we present the results obtained on a dataset of 226 massive lesions ( 109 malignant and 117 benign ) extracted from a database of mammograms collected in the framework of a collaboration between physicists from several italian universities and infn sections , and radiologists from several italian senological centers . despite the boundaries of the massesare usually not very sharp , our segmentation procedure leads to an accurate identification of the mass shapes both in malignant and benign cases , as shown in fig .[ fig : masses ] .the performances of the neural network in classifying the features extracted from each mass have been evaluated in terms of the sensitivity and the specificity on the test sets : the average values obtained are 78.1% and 79.1% respectively .the discriminating capability of the system has been evaluated also in terms of the receiver operating characteristic ( roc ) analysis ( see fig . 
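the boundary search can be illustrated in a few lines of numpy. the snippet below is a simplified, single-pass version of the idea - scanning radial lines from an approximate mass centre and keeping, on each line, the point that maximises the local grey-level variance - not the full iterative procedure described above; the window size, number of rays and the toy image are arbitrary choices:

```python
import numpy as np

def radial_boundary(roi, center, n_rays=72, window=5):
    """For each radial line from `center`, return the (row, col) point whose
    local grey-level variance along the ray is maximal - a crude stand-in
    for the critical-point search described in the text."""
    rows, cols = roi.shape
    cy, cx = center
    r_max = int(min(cy, cx, rows - 1 - cy, cols - 1 - cx))
    boundary = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        radii = np.arange(1, r_max)
        ys = np.clip((cy + radii * np.sin(theta)).astype(int), 0, rows - 1)
        xs = np.clip((cx + radii * np.cos(theta)).astype(int), 0, cols - 1)
        profile = roi[ys, xs].astype(float)
        # local variance in a sliding window along the ray
        var = np.array([profile[max(0, i - window):i + window + 1].var()
                        for i in range(len(profile))])
        i_best = int(np.argmax(var))
        boundary.append((ys[i_best], xs[i_best]))
    return np.array(boundary)

# toy example: a bright disc on a darker background
yy, xx = np.mgrid[0:128, 0:128]
roi = 100.0 + 80.0 * ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2)
pts = radial_boundary(roi, center=(64, 64))
radii = np.hypot(pts[:, 0] - 64, pts[:, 1] - 64)
print("estimated radius:", radii.mean().round(1), "+/-", radii.std().round(1))
```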
[fig : froc ] ) .the estimated area under the roc curve is .mass segmentation plays a key role in cadi systems to be used for supporting radiologists in the malignant vs. benign decision .we developed a robust technique based on edge detection to segment mass lesions from the surrounding normal tissue .the results so - far obtained in the classification of malignant and benign masses indicate that the segmentation procedure we developed provides an accurate approximation of the mass shapes and that the features we took into account for the classification have a good discriminating power .we are grateful to dr m. tonutti from cattinara hospital ( trieste , italy ) for her essential contribution to the present analysis .we acknowledge dr s. franz from ictp ( trieste , italy ) for useful discussions .landis _ et al _ , cancer statistics , 1999 .ca - cancer j clin * 49*(1 ) , 8 ( 1999 ) .chen _ et al _ , diagnosis of breast tumors with sonographic texture analysis using wavelet transform and neural networks , ultrasound med biol * 28*(10 ) , 1301 ( 2002 ) .r. bellotti _ et al _ , the magic-5 project : medical applications on a grid infrastructure connection , ieee nss conf rec * 3 * , 19021906 ( 2004 ) .metz , roc methodology in radiologic imaging , invest radiol * 21*(9 ) , 720 ( 1986 ) .
evaluating the degree of malignancy of a massive lesion on the basis of visual analysis of the mammogram alone is a non-trivial task. we developed a semi-automated system for massive-lesion characterization with the aim of supporting the radiological diagnosis. a dataset of 226 masses has been used in the present analysis. the system performance has been evaluated in terms of the area under the roc curve, obtaining .
lisa - laser interferometric space antenna - is a proposed mission which will use coherent laser beams exchanged between three identical spacecraft forming a giant ( almost ) equilateral triangle of side kilometres to observe and detect low frequency cosmic gw . in ground based detectors the arms are as symmetrical as possible so that the laser light experiences nearly identical delay in each arm of the interferometer which reduces the laser frequency / phase noise at the photodetector. however , in lisa , the lack of symmetry will be much larger than in terrestrial instruments .laser frequency noise dominates the other secondary noises , such as optical path noise , acceleration noise by 7 or 8 orders of magnitude , and must be removed if lisa is to achieve the required sensitivity of , where is the metric perturbation caused by a gravitational wave . in lisa , six data streams arise from the exchange of laser beams between the three spacecraft approximately 5 million km apart .these six streams produce redundancy in the data which can be used to suppress the laser frequency noise by the technique called time - delay interferometry ( tdi ) in which the six data streams are combined with appropriate time - delays .this work was put on a sound mathematical footing by showing that the data combinations constituted an algebraic structure ; the data combinations cancelling laser frequency noise formed the _ module of syzygies _ over the polynomial ring of time - delay operators .the module was obtained - that is its generators were obtained - for the simple case of stationary lisa in flat spacetime .these were the so - called first generation tdi .however , lisa spacecraft execute a rotational motion , the arm - lengths change with time and the background spacetime is curved , all of which affect the optical links and the time - delays .the rotation gives rise to the sagnac effect which implies that the up - down optical links are unequal , the arm - lengths or the time - delays change with time - flexing of arms .these effects can not be ignored if the laser frequency noise is to be effectively cancelled . in this paper , we compute the orbits of spacecraft in the newtonian framework where the earth s gravitational field is also taken into account .the base orbits we take to be keplerian in the gravitational field of the sun only . on these base orbits , we linearly superpose the perturbative effect of the earth s gravitational field .we choose the earth over jupiter because ( i ) the earth perturbs the keplerian orbit in resonance , resulting in a secular growth of the perturbations and , ( ii ) jupiter s effect is less than 10 of that of the earth s on the flexing and hence not dominant .the perturbative analysis is carried out within the clohessy - wiltshire ( cw ) framework .further , an extension of the previous algebraic approach is proposed for the general problem in which the time - delay operators in general do not commute ; this leads to the second generation tdi and imperfect cancellation of laser frequency noise .however , we show that there are symmetries in the physical model which can simplify to some extent the totally non - commutative problem. 
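As a toy illustration of the principle behind TDI (not the actual LISA combinations), the sketch below shows that for static, unequal arms a suitably time-delayed combination of two laser-noise-dominated data streams cancels a common laser noise to machine precision. The delays are whole samples and all secondary noises are omitted, so this is only a schematic demonstration.

import numpy as np

rng = np.random.default_rng(0)
n = 200000
p = rng.standard_normal(n)            # common laser phase-noise time series

def delay(x, k):
    # shift a sampled series by k samples (zero-padded at the start)
    out = np.zeros_like(x)
    out[k:] = x[:-k]
    return out

d1, d2 = 33, 41                       # round-trip delays of the two arms, in samples
s1 = delay(p, d1) - p                 # beat signals dominated by laser noise
s2 = delay(p, d2) - p

# unequal-arm Michelson-type combination: laser noise cancels exactly for static arms
X = (s1 - delay(s1, d2)) - (s2 - delay(s2, d1))
print(np.max(np.abs(X[d1 + d2:])))    # ~1e-15, cancellation to machine precision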
these computations will be useful in the development of a lisa simulators , the lisacode for instance .the keplerian orbits , the orbital motion in the gravitational field of the sun only are chosen so that the peak to peak variation in armlengths is the least km , see .we summarise the results of paper below .we choose the sun as the origin with cartesian coordinates as follows : the ecliptic plane is the plane and we consider a circular reference orbit of radius 1 a. u. centred at the sun .let where and km is a constant representing the nominal distance between two spacecraft of the lisa configuration .we choose the tilt of the plane of the lisa triangle to be which has been shown to yield minimum flexing of the arms .we choose spacecraft 1 to be at its lowest point ( maximum negative z ) at .this means that at this point , and .the orbit of the first spacecraft is an ellipse with inclination angle , eccentricity and satisfying the above initial condition . from the geometry , and obtained as functions of , _ 0 & = & , + e & = & ^1/2 - 1 .[ eq : eincl ] the equations for the orbit of spacecraft 1 are given by : x_1 & = & r(_1 - e ) _ 0 , + y_1 & = & r _ 1 , + z_1 & = & -r(_1 - e ) _ 0 .[ tltorb ] the eccentric anomaly is implicitly given in terms of by , _t - _ 0 , [ ecc ] where is the time and is the average angular velocity and the initial phase .the orbits of the spacecraft 2 and 3 are obtained by rotating the orbit of spacecraft 1 by and about the ; the phases , however , must be adjusted so that the spacecraft are at a distance from each other .the orbital equations of spacecraft are : x_k & = & x_1 _ k- y_1 _ k , + y_k & = & x_1 _ k + y_1 _ k , + z_k & = & z_1 , [ orbits ] where , with the caveat that the is replaced by the phases , where they are implicitly given by , _k - e _ k = t-_k - _ 0 . [ eq : psik ] these are the exact ( keplerian ) expressions for the orbits of the three spacecraft in the sun s field .the earth s field is now included perturbatively using the cw framework . the cw frame is chosen as follows : we take the reference particle to be orbiting in a circle of radius with constant keplerian angular velocity . then the transformation to the cw frame from the barycentric frame is given by , the direction is normal and coplanar with the reference orbit , the direction is tangential and comoving , and the direction is chosen orthogonal to the orbital plane .linearised dynamical equations for test - particles in the neighbourhood of the reference particle are easily obtained .since the frame is noninertial , coriolis and centrifugal forces appear in addition to the tidal forces . with the help of the cw formalism , it is easy to see that to the first order in ( or equivalently ) there exist configurations of spacecraft so that the mutual distances between them remain constant in time .the flexing appears only when we consider second and higher order terms in .in fact in we have shown that the second order terms describe the flexing of lisa s arms quite accurately as compared to the exact keplerian orbits .the cw equations for a test particle are given by : -2 - 3 ^2 x & = & 0 , + + 2 & = & 0 , + + ^2 z & = & 0 .[ gde2 ] we choose those solutions of eq.([gde2 ] ) which form an equilateral triangular configuration of side ( such solutions exist ) . for the spacecraftwe have the following position coordinates : x_k & = & - _ 0 ( t - _ k - _ 0 ) , + y_k & = & _ 0 ( t - _ k - _ 0 ) , + z_k & = & -_0 ( t - _ k - _ 0 ) , [ cws ] where . 
also at the initial phase of the configuration is described through . in this solution ,any pair of spacecraft maintain the constant distance between each other .lisa follows the earth behind .we consider the model where the centre of the earth leads the origin of the cw frame by - thus in our model , the ` earth ' or the centre of force representing the earth , follows the circular reference orbit of radius 1 a. u. also the earth is at a fixed position vector in the cw frame .we find that km , km and .the acceleration field due to the earth at any point ( in particular at any spacecraft ) in the cw frame is given by : ( ) = - g m _ , where kg is the mass of the earth and newton s gravitational constant . in order to write the cw equations in a convenient formwe first define the small parameter in terms of the quantity , where is the distance of the earth from the origin of the cw frame ; km which is more than 50 million km .so when deriving the forcing term we make the aprroximation , that is , we neglect compared to .it will turn out that the flexing due to the earth is small so that this approximation is not unjustified .we define which is the just the ratio of the tidal forces due to the earth and the sun .the cw equations including the earth s field take the form : -2 - 3 ^2 x + ^2 ( x - x _ ) & = & 0 , + + 2 + ^2 ( y - y _ ) & = & 0 , + + ^2 ( 1 + ) z & = & 0 .[ cwearth ] note that the compounded flexing due to the combined field of earth and sun is a nonlinear problem ; it is infact a three body problem .we however solve this problem approximately .assuming that both effects are small we may linearly add the flexing vectors due to the sun and earth ; that is , add the perturbative solutions obtained from eqs.([gde2 ] ) and ( [ cwearth ] ) ; the nonlinearities appear at higher orders in and .these would modify the flexing but we may neglect this effect because of the smallness .we find that the flexing produced by the earth is of the order of 1 or 2 m / sec upto the third year , just about 40 of that due to the sun .but , as shown in the flexing produced by the sun s octupole field is nearly exact to that produced by the keplerian orbits .thus we may do better by just adding the flexing vector produced by the earth to the keplerian orbit of the relevant spacecraft .we then seek perturbative solutions to eq .( [ cwearth ] ) to the first order in .we write , where are solutions at the zeroth order given by eq.([cws ] ) .we put ( or equivalently include it in ) in these solutions for simplifying the algebra . with the initial conditions : at , we have the results : x_1 & = & - _ ( t - _ ) + x _ + 2 y _ t - 2 _ 0 _ 0 + _ 0 t ( t - _ 0 ) , + y_1 & = & 2 _ [ ( t - _ ) + _ ] - _ 0 [ ( t - _ 0 ) + _ 0 ] + & & + _ 0 t ( t - _ 0 ) - t ( 2 x _ - 3 _ 0 _ 0 ) - ^2 t^2 y _ , [ solnxy ] where , _ ^2 & = & ( x _ - 2 _ 0 _ 0)^2 + ( 2 y _ - _ 0_ 0)^2 , + _ & = & .the equation can be exactly integrated and used directly to obtain the flexing .however , we can also expand this solution to the first order in and the result is : z_1 = _ 0 [ t t _ 0 - ( t t - t ) _ 0 ] .[ solnz ] as argued before , we add the perturbation given by to the keplerian orbit of each spacecraft .next we compute the optical links .the time - delay that is required for the tdi operators needs to be known very accurately - at least to 1 part in , that is , to about few metres - for the laser frequency noise to be suppressed . 
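Before moving on to the optical links, a minimal sketch of the orbit computation just described may be useful: it solves the Kepler equation of eq. [ecc] for the eccentric anomaly by Newton iteration and evaluates a generic ellipse of semi-major axis 1 AU as a stand-in for eq. [tltorb]. The eccentricity, mean motion and initial phase below are illustrative placeholders rather than the optimized LISA values.

import numpy as np

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    # solve psi - e*sin(psi) = M (eq. [ecc]) for psi by Newton iteration
    psi = np.array(M, dtype=float)               # M is a good starting guess for small e
    for _ in range(max_iter):
        delta = (psi - e * np.sin(psi) - M) / (1.0 - e * np.cos(psi))
        psi -= delta
        if np.max(np.abs(delta)) < tol:
            break
    return psi

R = 1.496e11                        # 1 AU in metres
e = 0.0096                          # placeholder eccentricity
Omega = 2.0 * np.pi / 3.156e7       # mean motion: one orbit per year (rad/s)
phi0 = 0.0                          # placeholder initial phase

t = np.linspace(0.0, 3.156e7, 1000)             # one year of samples
psi = eccentric_anomaly(Omega * t - phi0, e)
x = R * (np.cos(psi) - e)                       # generic in-plane ellipse coordinates
y = R * np.sqrt(1.0 - e ** 2) * np.sin(psi)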
in order to guarantee such level of accuracy, we numerically compute the optical links or the time - delay .this approach is guaranteed to give the desired accuracy or even better accuracy than what is required .we numerically integrate the null geodesics followed by the laser ray emitted by one spacecraft and received by the other .this computation is performed in the barycentric frame , and taking into account the fact that the spacetime is curved by the sun s mass only ( the earth s contribution is about 5 orders of magnitude less ) .the computation here is further complicated by the fact that the spacecraft are moving in this frame of reference and the photon emitted from one spacecraft must be received by the other spacecraft .we use the runga - kutta numerical scheme to integrate the differential equations describing the null geodesics .but since the end point of the photon trajectory is not known apriori , an iterative scheme must be devised for adjusting the parameters of the null geodesic , in order that the worldlines of the photon and the receiving spacecraft intersect .we have devised such a scheme based on the difference vector between the photon position vector and receiving spacecraft position vector .the six optical links have thus been numerically computed with sufficient accuracy required for tdi .the code gives results accurate to better than 10 metres - most of the time better than metres - except in a window of about half an hour when the error exceeds this value and becomes unacceptably large .we display the results in figures [ optlink ] and [ dplr ] for .more details may be found in .figure [ optlink ] shows all the six optical links in the combined field of the sun and earth .we also need to estimate the variation in armlength which is important for the tdi analysis to follow .figure [ dplr ] shows the rate of change of the six optical links as a function of time over a period of three years .we find that in the optimised model of lisa configuration , this rate of change is less than 4 m / sec .if we just consider the sun s field . including the earth s field the flexing still remains m / sec in the first two years and increases to m / sec in the third year .earlier estimates were m / sec .these numerical estimates are most crucial for their effect on residual laser frequency noise in the tdi .in order to cancel the laser frequency noise , time - delayed data streams are added together in which an appropriate set of time - delays are chosen . in generalthe time - delays are multiples of the photon transit time between pairs of spacecraft . in a scheme based on modules over commutative ringswas given where the module of data combinations cancelling the laser noise was constructed .this fully cancels the laser frequency noise for stationary lisa .there are only three delay operators corresponding to the three armlengths and the time - delay operators commute .this scheme can be straight forwardly extended to moving lisa , where , now because of sagnac effect , the up and down optical links have different armlengths but the armlengths are still constant in time . in this case , there are six delay operators corresponding to the six optical links and they commute. these are the modified but still first generation tdi . 
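Returning to the optical-link computation described earlier in this section, the sketch below shows, in flat space and ignoring the Shapiro delay, the kind of fixed-point iteration used to match the photon worldline with the receiving spacecraft. The actual code integrates null geodesics in the Sun's field, so this is only the kinematic skeleton of the scheme, with assumed function names.

import numpy as np

C = 299792458.0                     # speed of light, m/s

def light_travel_time(pos_emit, pos_recv, t_emit, guess=16.7, tol=1e-9, max_iter=20):
    # pos_emit, pos_recv: callables giving barycentric positions (3-vectors, metres)
    # of the emitting and receiving spacecraft at a given time; the photon leaves at
    # t_emit and we iterate dt -> |x_recv(t_emit + dt) - x_emit(t_emit)| / c.
    x0 = np.asarray(pos_emit(t_emit), dtype=float)
    dt = guess
    for _ in range(max_iter):
        dt_new = np.linalg.norm(np.asarray(pos_recv(t_emit + dt), dtype=float) - x0) / C
        if abs(dt_new - dt) < tol:
            return dt_new
        dt = dt_new
    return dt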
however , for lisa the armlengths do change with time - flexing of the arms - and the first generation tdi modified or otherwise do not cancel of the laser frequency noise sufficiently .we follow the notation and conventions of and which are the simplest for our purpose .the six links are denoted by .the time - delay operator for the link from s / c 1 to s / c 2 or is denoted by in and so on in a cyclic fashion .the delay operators in the other sense are denoted by ; the link from by and similarly the links are defined through cyclic permutation .let represent the laser frequency noise in s / c .let be the delay operator corresponding to the variable armlength , i.e. . then we have , u^1 & = & c_1 - z c_3 , + v^1 & = & l c_2 - c_1 . the other links in terms of obtained by cyclic permutations .also in the we have not included contributions from the secondary noises , gravitational wave signal etc . since hereour aim is to deal with laser frequency noise only .any observable is written as : x = p_i v^i + q_i u^i , where are polynomials in the variables .thus is specified by giving the six tuple polynomial vector .writing out the in terms of the laser noises , and in order that the laser frequency noise cancel for arbitrary , the polynomials must satisfy the equations : p_1 - q_1 + q_2 x - p_3 n & = & 0 , + p_2 - q_2 + q_3 y - p_1 l & = & 0 , + p_3 - q_3 + q_1 z - p_2 m & = & 0 .[ lneq ] the solutions to these equations as realised in earlier works are important , because they consist of polynomials with lowest possible degrees and thus are simple .since these are linear equations they define a homomorphism of modules and the solutions themselves form a module - the _ module of syzygies _ over the polynomial ring , where is the field of rational numbers and play the role of indeterminates . in general , the variables ( operators ) do not commute and hence the order of the variables is important .however , if we assume in a simple model that the arms do not flex , then the operators commute , and the generators of the module have been found via grbner basis methods . however , when the arms flex , the operators no longer commute .if we operate on with operators and in different orders , it is easily seen that .a combinatorial approach has been adopted in to deal with the totally non - commutative case . however , our aim here is to estimate the level of the non - commutativity of these operators in the context of our lisa model and use the symmetries to simplify the algebraic approach .the level of non - commutativity can be found by computing commutators which occur in several of the well known tdi observables like the michelson , sagnac etc .we find that given our model of lisa , we require to go only upto the first order in ; we find for our model metres / sec and thus even if one considers say 6 successive optical paths , that is , about seconds of light travel time , metres .this is well below few metres and thus can be neglected in the residual laser noise computation .moreover , terms ( and higher order ) can be dropped since they are of the order of ( they come with a factor ) which is much smaller than 1 part in .the calculations which follow neglect these terms . applying the operators twice in succession and dropping higher order terms as explained above , k_2 k_1 c & = & c(t - l_k_1 ( t - l_k_2 ) - l_k_2 ) , + & & c(t -l_k_1 - l_k_2 ) + l_k_2 l_k_1 c ( t -l_k_1 - l_k_2 ). 
the above formula can be easily generalised by induction to operators .we now turn to the commutators of the operators .the term in cancels out ; only the term remains .we list below a few of the commutators : jk - kj & = & l_j _ k - l_k _ j , + lmjk - jklm & = & ( l_l + l_m)(_j + _ k ) - ( l_j + l_k)(_l + _ m ) , + lmnxyz - xyzlmn & = & ( l_l + l_m + l_n)(_x + _ y + _ z ) + & & - ( l_x + l_y + l_z)(_l + _ m + _ n ) .[ commut ] we observe the following approximate symmetries in our model : _ x _ l , _ y _ m , _ z _ n , [ comm ] which also implies ( this combination occurs in the sagnac observables ) , _ x + _ y + _ z _ l + _ m + _ n . [ cyclic ] infact in our model , m / sec and m / sec upto the first three years in our model .the same is essentially true for the pairs of links and .thus these pairs of operators essentially commute .thus , we are not dealing with a set of totally non - commuting variables , but with an intermediate case .in addition to these approximate symmetries there are other commutators which vanish ` identically ' ( after dropping terms in and and higher order ) .it can be easily verified that commutators of the form , , ~ n \geq 2 ] . the quotient ring is clearly much smaller and simpler and the solution to eq .( [ lneq ] ) is sought for polynomials in this quotient ring .the solution set of polynomial vectors still form a module over .the future goal is to ` construct ' this module . by the time lisa flies the expectationsare for the laser frequency noise estimate to reduce to say .if we divide this number by the laser frequency hz , we obtain the noise estimate in the fractional doppler shift with the power spectral density ( psd ) : s_c ( f ) = |c ( f)|^2 ~10 ^ -27 hz^-1 , [ lsr_noise ] where is the fourier transform of . then by differentiating ,the psd of the random variable is just hz . the modified sagnac first generation tdi observable is given by the polynomial vector in the form by : = ( , l , lm , , zy , z ) , [ sgnc ] where and . if the variables commute then the laser frequency noise is fully cancelled .however , if they do not commute , there is a residual term .it can be computed as : c = _ 1 c_1 + _ 2 c_2 + _ 3 c_3 .we find that and $ ] and so by eq .( [ commut ] ) : t ( t ) = [ ( l_x + l_y + l_z)(_l + _ m +_ n ) - ( l_l + l_m + l_n)(_x + _ y + _ z ) ] , and thus . because the vary during the course of an year the also varies during the year and so also the amplitude of the random variable .thus the psd of is : s_c ( f;t ) = 4 ^2 t(t)^2 f^2 s_c(f ) .[ rsdl ] this noise must be compared with the secondary noise . 
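Before that comparison, a rough sketch of evaluating the residual-noise spectrum of eq. [rsdl]: the arm light time, the up/down rate asymmetry and the laser-noise level used below are assumed round numbers of the order quoted in the text, not outputs of the orbit code.

import numpy as np

c = 299792458.0                 # m/s
L = 16.7                        # one-way light time of an arm, seconds (assumed)
dLdot = 0.01                    # assumed up/down asymmetry of the arm rates, m/s
S_laser = 1e-27                 # fractional-frequency laser noise PSD, 1/Hz

# Delta T of eq. [rsdl]: (Lx+Ly+Lz)(Ldot_l+Ldot_m+Ldot_n) - (Ll+Lm+Ln)(Ldot_x+Ldot_y+Ldot_z),
# approximated here by 3L times the summed rate asymmetry converted to a dimensionless rate
delta_T = 3.0 * L * 3.0 * dLdot / c       # seconds

f = np.logspace(-4, -1, 200)              # Fourier frequencies, Hz
S_residual = 4.0 * np.pi ** 2 * delta_T ** 2 * f ** 2 * S_laser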
however , because we are considering the modified tdi eq .( [ sgnc ] ) , there are extra factors and which do not appear in the corresponding first generation tdi .these factors introduce an additional multiplicative factor , namely , in the secondary noise psd which leaves the snr unchanged but must be considered when it is compared with the residual laser frequency noise given in eq .( [ rsdl ] ) .( f ) = 4 ^2 ( 3 f l_0)\{[8 ^2 3 fl_0 + 16 ^2 f l_0 ] s_acc + 6 s_opt } where and .in the figure [ sagnac ] we plot and at three epochs an year apart .we see that , clearly the residual laser frequency noise is few orders of magnitude below the secondary noises .since the other sagnac variables and are obtained by cyclic permutations of the spacecraft , the residual laser noise is similarly suppressed in them .the basic reason for this remarkable cancellation is the symmetry inherent in the physics .note that and hence is an element of the module we are seeking .we have computed in the newtonian framework the spacecraft orbits in the combined field of the sun and earth and from this deduced the flexing of the arms of lisa by choosing the model which gave minimum flexing when only the sun s field was taken into account . nowthe flexing is no more periodic as was the case when only the sun s field was considered .we have ignored the effect of jupiter because we believe this effect to be not so dominant as that of the earth .writing the tidal parameter for jupiter , , similar to of earth , where , kg is the mass of jupiter and , the distance from lisa to jupiter , which we take on the average to be a. u. , we find .moreover , jupiter has its own periodicity pertaining to its orbit and therefore will not be in resonance as was the case with the earth , and thus there will be no secular effect .thus we do not expect the effect of jupiter on flexing to dominate .note that these results are valid so long as we can neglect the nonlinearities arising from higher order terms in and .we have computed the residual laser frequency noise in one of the important tdi variables , namely , the sagnac .the residual noise is satisfactorily suppressed because of the symmetry . in other variables such as the michelsonthis is not true and higher degree polynomials will be required .the algebraic approach outlined above seems promising .our model of lisa is optimal ( minimal flexing of arms ) only in the sun s field .clearly this opens up the question of seeking an optimal model for the lisa configuration in the field of the sun , earth , jupiter and other planets which will minimise the flexing of the arms and therefore the residual laser frequency noise in the modified first generation tdi .we finally remark that our computations here may be useful in the development of a lisa simulator .the author would like to thank the indo - french centre for the promotion of advanced research ( ifcpar ) project no .3504 - 1 under which this work has been carried out .this work is in collaboration with j - y vinet and r. nayak .p. bender _et al . _ "lisa : a cornerstone mission for the observation of gravitational waves " , system and technology study report esa - sci(2000 ) 11 , 2000 .j. w. armstrong , lrr-2006 - 1 : http://relativity.livingreviews.org/articles/lrr-2008-2 . s. v. dhurandhar , k. rajesh nayak , j - y .vinet , _ phys .rev _ , * d 65 * , 102002(2002 ) .w. h. clohessy and r. s. wiltshire , journal of aerospace sciences , 653 - 658 ( 1960 ) ; + d. a. 
vallado , _ foundations of astrodynamics and applications _ , 2nd edition 2001 , microcosm press kluwer ; + also in s. nerem , http://ccar.colorado.edu/asen5050/lecture12.pdf(2003 ) .a. petiteau , g. auger , h. halloin , o. jeannin , e. pagnol , s. pireaux , t. regimbau and j - y .vinet , _ phys ._ d * 77 * , 023002 ( 2008 ) .s. v. dhurandhar , k. r. nayak , s. koshti and j - y .vinet , _ class .quantum grav ._ , * 22 * , 481 ( 2005 ) ; r. nayak , s. koshti , s. v. dhurandhar and j - y .vinet , _ class .quantum grav ._ , * 22 * , 1763 ( 2006 ) . k. r. nayak and j - y vinet , _ phys_ d * 70 * , 102003 ( 2004 ) .s. v. dhurandhar , j - y .vinet and k. r. nayak , submitted to cqg , gr - qc 0805.4314 , ( 2008 ) .m. vallisneri , _ phys ._ d * 72 * , 04003 ( 2005 ) .
lisa is a joint space mission of the esa and nasa for detecting low frequency gravitational radiation in the band hz . in order to attain the requisite sensitivity for lisa , the laser frequency noise must be suppressed below the other secondary noises such as the optical path noise , acceleration noise etc . this is achieved because of the redundancy in the data , more specifically , by combining six appropriately time - delayed data streams containing fractional doppler shifts - time delay interferometry ( tdi ) . the orbits of the spacecraft are computed in the gravitational field of the sun and earth in the newtonian framework , while the optical links are treated fully general relativistically and thus , effects such as the sagnac , shapiro delay , etc . are automatically incorporated . we show that in the model of lisa that we consider here , there are symmetries inherent in the physics , which may be used effectively to suppress the residual laser frequency noise and simplify the algebraic approach to tdi .
infectious processes spread through specific contact ties between individuals . for any fixed time , these transmission routes constitute a network . for a deleterious infection ,a treatment aims to reduce the probability of transmission from one node to another .for example , in the context of hiv , one strategy to reduce the overall rate of transmission is to test and treat all infected individuals , which is the goal of several ongoing clinical trials .other spreading processes propagate through specific ties between individuals , such as diffusive processes of ideas or practices , and can also be modeled as an infectious process . to determineif a treatment is effective , one might randomly assign some individuals to receive the treatment , and others to receive standard care or a placebo , which randomization ensures balance on average in the predictors between the two arms .this type of experiment is called a _ randomized controlled trial _ ( rct ) . when this is not possible, a population of individuals may be available for observation in which two or more groups have different exposures related to the outcome , which can be made to resemble an rct if all confounders are correctly adjusted for .in addition , _ interference _ occurs when an individual s outcome depends on the outcomes of other individuals in addition to exposure .this can be avoided by assuming that clusters of individuals are independent , and adjusting for the correlation of outcomes within clusters as well as the probability of exposure .dependent outcomes within clusters generally results in a decrease in statistical _ power _ , or the probability of detecting an exposure effect if one exists , and this decrease can depend on epidemic dynamics as well as within - cluster network structure . generalized estimating equations ( gees) are a semi - parametric approach for estimating a treatment effect in randomized trials when outcome data are correlated within clusters .this approach produces unbiased estimates of the average marginal treatment effect across all clusters , whereas mixed effect models provide an estimated intervention effect conditional on adjustment covariates . accounting for the correlation between baseline covariates and the outcome can improve the efficiency of treatment effect estimates , using either the augmented gee or other semiparametric approaches , such as targeted maximum likelihood estimation ( tmle) . when the exposure is not randomized , estimating the effect of exposure without bias requires that the propensity that each cluster is exposed to treatment be correctly estimated .few studies of infectious processes so far use contact network information to improve statistical efficiency using these approaches .this paper investigates the degree to which exposure effect estimates in simulated and empirical observational studies of epidemic processes are improved by adjusting for features of the contact networks and infections at baseline .section 2 gives details on the network generation framework , infectious process , and the estimation procedures used to evaluate the effect of exposure while adjusting for network features .section 3 details a series of simulations of observational studies using this framework .section 4 applies this approach to an empirical dataset , treating the spread of a novel microfinance program as an infectious process in a collection of villages in karnataka , india .section 5 provides concluding remarks .a simple network consists of set of nodes and edges . 
the edges may be described in an _ adjacency matrix _ where element has value 1 if an edge exists between nodes and , and a otherwise .one specific network model , the stochastic blockmodel , assumes that each node belongs to only one block in a partition of nodes .the complete set of node memberships may be represented compactly as a vector in this model , the probability of there being an edge between two nodes and depends only on their block membership karrer and newman extend this model to the _ degree - corrected stochastic blockmodel _ , in which each node has arbitrary expected degree , where is the observed _ degree _ for node .the likelihood of this model assumes that the mean number of edges between any two nodes and is the product of the expected degree of nodes and ( and , respectively ) , multiplied by the expected amount of mixing between the blocks to which and belong .the full likelihood of this model is : equation [ degsbmlik ] assumes is poisson distributed ( allowing for multiple edges between pairs of nodes ) , which converges to a simple bernoulli network for sparse networks in the limit .the second half of the likelihood contains factors of one half to account for the fact that _ self - edges _ ( edges from one node to itself )are counted twice by this indexing .in addition to block structure and node degree , networks may vary in the extent to which degrees of adjacent nodes are correlated .one metric for summarizing this property is _ degree assortativity _ , which is the pearson correlation coefficient of degrees of adjacent nodes taken over all network edges .degree assortativity can be varied by performing _degree assortative rewiring _ , which increases or decreases the assortativity in the network while preserving block structure and each node s degree .this is performed by randomly selecting two edges within a block pair and rewiring them , as described in algorithm 1 .a diagram of this process is shown in figure [ rewire ] . to decrease assortativity , the inequality in step 3must be reversed .* select two blocks and at random .* select two edges and at random between blocks and . * if : * remove edges and * add edges and .panel * b * highlights two edges selected within the same block pair .panel * c * shows a potential rewiring , which will only occur if rewiring will increase assortativity . in this case , rewiring would increase degree assortativity , and panel * d * displays the rewiring.,width=491 ] an infectious process on the collection of networks is simulated by employing a stochastic compartmentalmodel , shown in algorithm 2 . of all nodes are initially selected to be infected at random across all study networks .then , infected node in network cluster selects of their neighbors and infect them with probability , where is the node s _ infectivity _ ( described in more detail below ) .this process is repeated until of the population is infected at baseline .half of the clusters are determined to be exposed ( ) or unexposed ( ) , and the infectious process continues for time steps with a probability to infect for unexposed clusters , and for exposed clusters .* of all nodes are selected uniformly at random to be initially infected . + * until population incidence : + * * for each infected node ( in random order ) : + * * * successively select neighbors .+ * * * if neighbor is already infected , do nothing . if not , infect with probability . +* repeat times : + * for each infected node ( in random order ) : + * * successively select neighbors . 
+ * * if neighbor is already infected , do nothing . if not , infect with probability : + * * * for those in unexposed clusters , + * * * for those in exposed clusters . in this process, infectivity is the number of individuals node may infect at a given time , which may vary between and .zhou et al. showed that the properties of network spread can depend strongly on infectivity .unit infectivity and degree infectivity occur when an individual attempts to infect either one partner ( selected at random ) or all partners , respectively .illustrative diagrams of the infectious process over time are given in the supplementary section ` s1 ` .while a network consists of pairwise connections between individuals and , properties of the network can be summarized for each node .this vector is composed of a small number of functions of the network to which the nodes belong , which can be used for the estimation of the effects of exposure , discussed in the following section .some of these are constant per network , and some are different for every individual .they fall into four broad classes : those involving degree , mesoscopic structure ( structure with a size between that of the node and entire network ) , baseline infections , and combinations of these .let represent the adjacency matrix for the network structure in cluster .one network summary for each node is its degree , mean neighbor degree is the unweighted average of a node s neighbors degrees ._ assortativity _ is a composite measure of mean neighbor degree across the entire network , defined previously .network features may also capture _ mesoscopic _ network structure , or structure existing between local and global scales of the network .for example , two blocks of nodes may be more highly connected to all other blocks the simulated networks .another mesoscopic structure is a _ connected component _ , or subset of nodes for which a path exists between each pair of nodes ._ exists between two node and if and only if there exists a subset of edges in the network that connect nodes and .the components of the network are assumed to be ordered from largest to smallest .the largest component in the network contains nodes .the mean component size is , the number of components , and the size of the component each node belongs to is .another summary feature is the infectious status of each node in the network at baseline .one simple metric is the number of infected neighbors at baseline for each node , or the number of infected individuals belonging to the same component as a given node .another metric is the length of the shortest path between each node and each infected individual at baseline .the shortest path length between nodes and is , where when no path exists between the two nodes . the shortest path length from the closest node infected at baseline is , with inverse .the sum of the inverse path lengths to node is .while these final metrics would be difficult to determine with limited knowledge about a network , we examine whether their inclusion in the analysis yields strong enough improvements to warrant the efforts to gather the required network data .additional metrics may also be defined for multilayer networks , in which several edges with distinct labels may be shared between nodes .the outcome model may incorporate any subset of covariates , for example as obtained from a stepwise selection procedure . table [feature_table ] summarizes these network features .
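To make a few of these summaries concrete, the following sketch computes some of the node-level features with networkx; the variable names and the subset of features are ours, and a full analysis would compute the complete set listed in the feature table.

import networkx as nx
import numpy as np

def node_features(G, infected_at_baseline):
    # G: networkx Graph for one cluster; infected_at_baseline: set of node ids
    # infected at baseline.  Returns a dict of per-node feature dicts.
    degree = dict(G.degree())
    mean_nbr_deg = nx.average_neighbor_degree(G)
    assort = nx.degree_assortativity_coefficient(G)      # constant per network

    comp_size = {}
    for comp in nx.connected_components(G):
        for v in comp:
            comp_size[v] = len(comp)

    features = {}
    for v in G.nodes():
        # shortest path lengths from v to each baseline infection (inf if unreachable)
        dists = []
        for u in infected_at_baseline:
            try:
                dists.append(nx.shortest_path_length(G, v, u))
            except nx.NetworkXNoPath:
                dists.append(np.inf)
        inv = [1.0 / d if d > 0 else 0.0 for d in dists]  # 1/inf -> 0 for unreachable nodes
        features[v] = {
            "degree": degree[v],
            "mean_neighbor_degree": mean_nbr_deg[v],
            "assortativity": assort,
            "component_size": comp_size[v],
            "n_infected_neighbors": sum(1 for u in G.neighbors(v)
                                        if u in infected_at_baseline),
            "dist_to_nearest_infected": min(dists) if dists else np.inf,
            "sum_inverse_dists": float(sum(inv)),
        }
    return features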
in some infectious processes , transmission occurs through specific ties between individuals , and these ties constitute a contact network . to estimate the effect of an exposure on infectious outcomes within a collection of contact networks , the analysis must adjust for the correlation of outcomes within networks as well as the probability of exposure . this estimation can be more statistically efficient when it leverages baseline covariates related to both the exposure and the infectious outcome . we investigate the extent to which gains in statistical efficiency depend on contact network structure and properties of the infectious process . to do this , we simulate a stochastic compartmental infection on a collection of contact networks and employ the observational augmented gee with a variety of contact network and baseline infection summaries as adjustment covariates . we apply this approach to estimate the effect of leadership and of a concurrent self - help program on the spread of a novel microfinance program in a collection of villages in karnataka , india .
since its establishment in the early decades of the last century , quantum theory has been elevated to the status of the `` most precisely tested and most successful theory in the history of science '' . andyet , many of its consequences have puzzled and still do most of the physicists confronted to it . at the heart of many of the counter - intuitive features of quantum mechanicsis quantum entanglement , nowadays a crucial resource in quantum information and computation but that also plays a central role in the foundations of the theory .for instance , as shown by the celebrated bell s theorem , quantum correlations between distant parts of an entangled system can violate bell inequalities , thus precluding its explanation by any local hidden variable ( lhv ) model , the phenomenon known as quantum non - locality .given its fundamental importance and practical applications in the most varied tasks of quantum information , not surprisingly many generalizations of bell s theorem have been pursued over the years .bell s original scenario involves two distant parties that upon receiving their shares of a joint physical system can measure one out of possible dichotomic observables .natural generalizations of this simple scenario include more measurements per party and sequential measurements , more measurement outcomes , more parties and also stronger notions of quantum non - locality .all these different generalizations share the common feature that the correlations between the distant parties are assumed to be mediated by a single common source of states ( see , for instance , fig .[ fig_dags]a ) .however , as it is often in quantum networks , the correlations between the distant nodes is not given by a single source but by many independent sources which distribute entanglement in a non - trivial way across the whole network and generate strong correlations among its nodes ( figs .[ fig_dags]b - d ) .surprisingly , in spite of its clear relevance , such networked scenario is far less explored .the simplest networked scenario is provided by entanglement swapping , where two distant parties , alice and charlie , share entangled states with a central node bob ( see fig . [fig_dags]b ) . 
upon measuring in an entangled basis and conditioning on his outcomes, bob can generate entanglement and non - local correlations among the two other distant parties even though they had no direct interactions .to contrast classical and quantum correlation in this scenario , it is natural to consider classical models consisting of two independent hidden variables ( figs .[ fig_dags]b ) , the so - called bilocality assumption .the bilocality scenario and generalizations to networks with an increasing number of independent sources of states ( figs .[ fig_dags]d ) , the so called n - locality scenario allow for the emergence of a new kind of non - local correlations .for instance , correlations that appear classical according to usual lhv models can display non - classicality if the independence of the sources is taken into account , a result experimentally demonstrated in .however , previous works on the topic have mostly focused on developing new tools for the derivation of inequalities characterizing such scenarios and much less attention has been given to understand what are the quantum correlations that can be achieved in such networks .that is precisely the aim of the present work .we consider in details the bilocality scenario and the bilocality inequality derived in and characterize the non - bilocal behavior of general qubit quantum states when the parties perform different kinds of projective measurements .first of all we show that the correlations arising in an entanglement swapping scenario , i.e. when bob performs a bell - state measurement ( bsm ) , form a strict subclass of those correlations which can be achieved by performing separable measurements in all stations . focusing on this wider class of correlations , we derive a theorem characterizing the maximal violation of the bilocality inequality that can be achieved from a general two - qubit quantum states shared among the parties .this leads us to obtain a characterization for the violation of the bilocality inequality in relation to the violation of the chsh inequality .finally we show how our maximization method can be extended to the star network case , a -partite generalization of the bilocality scenario , deriving thus the maximum violation of the n - locality inequality that can be extracted from this network .in the following we will mostly consider the bilocality scenario , which classical description in terms of directed acyclic graphs ( dags ) is shown in fig .[ fig_dags]-b .it consists of three spatially separated parties ( alice , bob and charlie ) whose correlations are mediated by two independent sources of states . in the quantum case , bob shares two pairs of entangled particles , one with alice and another with charlie . upon receiving their particles alice , bob and charlie perform measurements labelled by the random variables , and obtaining , respectively , the measurement outcomes , and .the difference between bob and the other parties is the fact that the first has in his possession two particles and thus can perform a larger set of measurements including , in particular , measurements in an entangled basis . and .* d ) * extension of the bilocality scenario to a network consisting of _ n _ different stations sharing a quantum state with a central node , i.e. the so - called n - local star network . ] any probability distribution compatible with the bilocality assumption ( i.e. 
independence of the sources ) can be decomposed as in particular , if we consider that each party measures two possible dichotomic observables ( ) , it follows that any bilocal hidden variable ( blhv ) model described by eq .[ eq : bilocal_set_correlations_definition ] must fulfill the bilocality inequality with and where as shown in , if we impose the same causal structure to quantum mechanics ( e.g. in an entanglement swapping experiment ) we can nonetheless violate the bilocality inequality ( even though the data might be compatible with lhv models ) , thus showing the existence of a new form of quantum _ non - locality _ called quantum _ non - bilocality_. to that aim let us consider the entanglement swapping scenario with an overall quantum state , with .we can choose the measurements operators for the different parties in the following way .stations a and c perform single qubit measurements defined by station b , instead , performs a complete bsm , assigning to the two bits the values the binary measurement is then defined such that it returns , with respect to the value of .this leads to where , in the last steps , we made explicit use of the marginalization of probability over .+ with these state and measurements , the quantum mechanical correlations achieve a value , which violates the bilocality inequality and thus proves quantum non - bilocality .as reproduced above , in an entanglement swapping scenario qm can exhibit correlations which can not be reproduced by any blhv model . in turn, it was recently proved that an equivalent form of the bilocality inequality ( eq .[ eq : bilocality_inequality ] ) , can be violated by qm in the case where all parties only perform single qubit measurements ( i.e. and linear combinations ) . herewe will prove that , given the bilocality inequality ( eq .[ eq : bilocality_inequality ] ) , the non - bilocal correlations arising in an entanglement swapping scenario are a strict subclass of those obtainable by means of separable measurements .the core of the bilocality parameter is the evaluation of the expected value ( eq . [ eq : mean_abc_definition ] ) , that in the quantum case is given by .\ ] ] for the entanglement swapping scenario we can summarize the measurements in stations a and c by where and are general single qubit projective measurements with eigenvalues and . when dealing with station b , it is suitable to consider its operatorial definition which is implicit in eq .[ eq : mean_abc_definition_swappcase ] .indeed we can consider that is the outcome of our measurement , leading to values shown in table [ tab : b_swap_values ] .[ tab : tomo ] the quantum mechanical description of the operator ( in an entanglement swapping scenario ) is thus given by which relates each value of with its correct set of outcomes .this leads to the following theorem .[ non - bilocal correlations and separable measurements ] [ theo : bilo_sep = ent ] given the general set of separable measurements qm predictions for the bilocality parameter which arise in an entanglement swapping scenario ( where bob performs the measurement described in eq .[ eq : by_swapping_definitions ] ) are completely equivalent to those obtainable by performing a strict subclass of eq .[ eq : b_y_separable_general_form ] , i.e. let us write the bell basis of a two qubit hilbert space in terms of the computational basis ( ) . 
from eq .[ eq : by_swapping_definitions ] , we obtain this shows that the entanglement swapping scenario is equivalent to the one where station only performs the two separable measurements and , which form a strict subclass of the general set of separable measurements given by eq .[ eq : b_y_separable_general_form ]. moreover if we consider a rotated bell basis , then we obtain where and ( ) are orthogonal unitary vectors . due to the constraints and , this case still represents a strict subset of eq .[ eq : b_y_separable_general_form ] . as it turns out , this theorem has strong implications in our understanding of the non - bilocal behavior of qm .indeed , it shows how the entanglement swapping scenario is not capable of exploring the whole set of quantum non - bilocal correlations , since it is totally equivalent to a subclass of bob s separable measurements . as we will show next, a better characterization of quantum correlations within the bilocality context must thus in principle take into account the general form of bob s separable measurements , especially when dealing with different types of quantum states .we will now explore the maximization of the bilocality inequality considering that bob performs the separable measurements described by eq .[ eq : b_y_separable_general_form ] .it is convenient to consider that station b as a unique station composed of the two substations and , which perform single qubit measurements on one of the qubits belonging to the entangled state shared , respectively , with station a or c ( see fig . [ fig_dags]-c ) .+ let perform a general single qubit measurement and similarly for , and .we can define these measurements as where .let us now define a general 2-qubit quantum state density matrix as the coefficients can be used to define a real matrix that lead to the following result : [ theo : b_separable_measurements ] given the set of general separable measurements described in eq .[ eq : general_separable_measurements ] and defined the general quantum state accordingly to eq .[ eq : general_2qubit_state ] , the bilocality parameter is given by let us consider two operators in the form and a two qubit quantum state described by eq .[ eq : general_2qubit_state ] .we can write =\operatorname{tr}[\displaystyle \sum_{j , k=1,2,3}(v_1^j v_2^k \sigma_j \otimes \sigma_k){\varrho}]= \displaystyle \sum_{j , k=1,2,3 } v_1^j v_2^k t_{jk}=\vec{v}_1\cdot(t_{{\varrho}}\vec{v}_2 ) , \end{array } } \ ] ] where we made use of the properties of the pauli matrices .given the set of separable measurements described in eq .[ eq : general_separable_measurements ] , and the definitions of and ( showed in eq .[ eq : ij_definition ] ) , the proof comes from a direct application of eq .[ eq : operators_and_vectors_proof ] to the quantum mechanical expectation value : next we proceed with the maximization of the parameter over all possible measurement choices , that is , the maximum violation of bilocality we can achieve with a given set of quantum states . to that aim , we introduce the following lemma .[ lemma : mmt_mtm ] given a square matrix and defined the two symmetric matrices and , each _ non - null _ eigenvalue of is also an eigenvalue of , and _vice versa_. 
let be an eigenvalue of if we must have .we can then apply the operator from the left , obtaining which shows that is an eigenvector of with eigenvalue .+ the _ opposite _ statement can be analogously proved .we can now enunciate the main result of this section .[ theo : b_maximization_separable ] given the set of general separable measurements described in eq .[ eq : general_separable_measurements ] , the maximum bilocality parameter that can be extracted from a quantum state can be written as where and ( and ) are the two greater ( and positive ) eigenvalues of the matrix ( ) , with and .we will prove theorem [ theo : b_maximization_separable ] , following a scheme similar to the one used by horodecki for the chsh inequality .let us introduce the two pairs of mutually orthogonal vectors and let us apply eq .[ eq : orthogonal_vectors_changes ] to eq .[ eq : b_separable_measurements ] where the maximization is done over the variables and .we can choose and so that they maximize the scalar product .defining and remembering that and are unitary vectors , we obtain next we have to choose the optimum variables variables and .this leads to the set of equations this system of equations admits only solutions constrained by leading to next , we must take into account the constraints and .since these two couples of vectors are , however , independent , we can proceed with a first maximization which deals only with the two set of variables and . since is a symmetric matrix, it is diagonalizable .let us call and its eigenvalues and let us write and in an eigenvector basis .if we define and , our problem can be written in terms of lagrange multipliers related to the maximization of a function , given the constraints where we considered that finding the values that maximize is equivalent to find these values for .let us now introduce the scaled vectors and .we obtain whose solution is given by vectors with two null components , out of three . if we define and if , the solution related to the maximal value is then given by which leads to where we made use of the lemma [ lemma : mmt_mtm ] .+ the maximization over the last two variables leads to an analogous lagrange multipliers problem with similar solutions , thus proving the theorem .this theorem generalizes the results of ( which dealt with some particular classes of quantum states in the entanglement swapping scenario ) to the more generic case of any quantum state in the separable measurements scenario ( which , in a bilocality context , includes the correlations obtained through entanglement swapping ) .it represents an extension of the horodecki criterion to the bilocality scenario , taking into account the general class of separable measurements which can be performed in station b. our result thus shows that as far as we are concerned with the optimal violations of the bilocality inequality provided by given quantum states , separable measurements or a bsm ( in the right basis ) are fully equivalent .we will now characterize quantum non - bilocal behaviour with respect to the usual non - locality of the states shared between a , b and b , c. let us start from eq .[ eq : max_bilocality ] and separately consider bell non - locality of the states and .we can quantify it by evaluating the greatest chsh inequality violation that can be obtained with these states .let us define the chsh inequality as if we apply the criterion by horodecki _ , we obtain where we defined and accordingly to eq .[ eq : max_bilocality ] . 
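Before the comparison, a quick numerical illustration of the two maxima may help. The sketch below is our reading of eq. [eq:max_bilocality] and of the Horodecki CHSH bound for projective measurements: it builds the correlation matrix of a two-qubit state and checks that two singlet states give 2*sqrt(2) for the CHSH maximum (the Tsirelson bound) and sqrt(2) > 1 for the bilocality parameter.

import numpy as np

PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def correlation_matrix(rho):
    # t_jk = Tr[ rho (sigma_j x sigma_k) ] for a two-qubit density matrix rho
    return np.array([[np.trace(rho @ np.kron(PAULI[j], PAULI[k])).real
                      for k in range(3)] for j in range(3)])

def two_largest_eigs(rho):
    # two largest eigenvalues of T^t T (the squared singular values of T)
    T = correlation_matrix(rho)
    eigs = np.sort(np.linalg.eigvalsh(T.T @ T))[::-1]
    return eigs[0], eigs[1]

def chsh_max(rho):
    l1, l2 = two_largest_eigs(rho)
    return 2.0 * np.sqrt(l1 + l2)                 # Horodecki criterion

def bilocality_max(rho_ab, rho_bc):
    a1, a2 = two_largest_eigs(rho_ab)
    c1, c2 = two_largest_eigs(rho_bc)
    return np.sqrt(np.sqrt(a1 * c1) + np.sqrt(a2 * c2))

psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
singlet = np.outer(psi_minus, psi_minus.conj())

print(chsh_max(singlet))                          # 2*sqrt(2) ~ 2.828
print(bilocality_max(singlet, singlet))           # sqrt(2) ~ 1.414 > 1, non-bilocal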
from a direct comparison of [ eq : max_bilocality ] and [ eq : max_chsh ]we can write [ prop : loc_means_biloc ] applying the cauchy - schwarz inequality we obtain .the blue sets represent quantum states that do not violation the chsh inequality for ( ab local ) or ( bc local ) .the orange set includes , instead , these states whose correlations do not violate the bilocality inequality , while the whole set of quantum correlations is represented in green .for all different regions a blue square shows those decompositions which are not allowed ( crossed with red lines ) , accordingly to the greater square on the right . ]this result shows that if the two sources can not violate the chsh inequality then they will also not violate the bilocality inequality .thus , in this sense , if our interest is to check the non - classical behaviour of sources of states , it is just enough to check for chsh violations ( at least if bob performs a bsm or separable measurements ) .notwithstanding , we highlight that this does not mean that the bilocality inequality is useless , since there are probability distributions that violate the bilocality inequality but nonetheless are local according to a lhv model and thus can not violate any usual bell inequality .next we consider the reverse case : is it possible to have quantum states that can violate the chsh inequality but can not violate the bilocality inequality ? that turns out to be the case . to illustrate this phenomenon, we start considering two werner states in the form . in this case , indeed , in order to have a non - local behaviour between a and b ( b and c ) we must have ( ) while it is sufficient to have in order to witness non - bilocality .this example shows that on one hand it might be impossible to violate the bilocality inequality although one of or is bell non - local ( for instance and ) .it also shows that , when one witnesses non - locality for only one of the two states , it can be possible , at the same time , to have non - bilocality by considering the entire network ( for instance and ) .another possibility is the one described by the following proposition given a tripartite scenario we will prove this point with an example .let us take where we defined as .\end{array}\ ] ] for these two quantum states one can check that which leads to this shows how it is possible to have non - local quantum states which nonetheless can not violate the bilocality inequality ( with separable measurements ) .all these statements provide a well - defined picture of the relation between the chsh inequality and the bilocality inequality in respect to the quantum states .we indeed derived all the possible cases of quantum non - local correlations which may be seen between couples of nodes , or in the whole network ( according to the chsh and bilocality inequalities ) .this characterization is shown in fig .[ fig_exp2_new_loc - biloc ] , in terms of a venn diagram .we finally notice that if a and b share a maximally entangled state while b and c share a generic quantum state , then it is easier to obtain a bilocality violation in the tripartite network rather than a chsh violation between the nodes and .indeed it is possible to derive where we made use of the following lemma [ lemma : t_less_1 ] given the parameters and defined in eq .[ eq : max_bilocality ] , it holds this proof will be divided in two main points .+ + * 1 ) * .+ _ as discussed in , if we apply a local unitary to the initial quantum state , the matrix will transform accordingly to 
according to the singular decomposition theorem , it is always possible to choose and such that is diagonal , thus demonstrating point 1 . _ + + it is important to stress that we can always rotate our hilbert space in a way that so we can take without loss of generality . + + * 2 ) * _ if is diagonal , then the eigenvalues of are less or equal to 1 .+ it was shown in that , for every quantum state , we have regardless to the basis chosen for our hilbert space .if is diagonal then and its eigenvalues can be written as . _ + + given the definitions of and ( and ) described in eq .[ eq : max_bilocality ] , the lemma is proved .we now generalize the results of theorem [ theo : b_maximization_separable ] , to the case of a _n - partite _ star network .this network is the natural extension of the bilocality scenario , and it is composed of sources sharing a quantum state between one of the stations and a central node b ( see fig . [fig_dags]-d ) .the bilocality scenario corresponds to the particular case where .the classical description of correlations in this scenario is characterized by the probability decomposition as shown in , assuming binary inputs and outputs in all the stations , the following n - locality inequality holds where we will now derive a theorem showing the maximal value of parameter that can be obtained by separable measurements on the central node and given arbitrary bipartite states shared between the central node and the parties . [theo : star_netw_maximization ] given single qubit projective measurements and defined the generic quantum state accordingly to eq .[ eq : general_2qubit_state ] , the maximal value of is given by where and are the two greater ( and positive ) eigenvalues of the matrix with . in our single qubit measurements scheme the operator can be written as as pointed out in , this allows us to write which leads to introducing the pairs of mutually orthogonal vectors allows us to write we can choose the parameters so that they maximize the scalar products .we obtain we can now proceed to the maximization over the parameters . let us define the function we can write which , similarly to eq . [ eq : colored_white_imperfect_e_partial_derivatives_2values ] , admits only solutions constrained by this leads to which allows us to write let us now define we have that labeling and as the eigenvalues of ( which is real and symmetric ) and writing and in an eigenvector basis we obtain the lagrange multipliers problem related to the maximization of a function , given the constraints : where we considered that the values which maximize also maximize .this lagrangian multipliers problem can be treated similarly to eq .[ eq : lagrange_problem_bilo ] , giving the same results .if , we obtain which leads to the proof is concluded by applying iteratively this procedure .we notice that the bilocality scenario can be seen as a particular case ( ) of a star network , where and .moreover we emphasize that eq .[ eq : max_star_network ] gives the same results that would be obtained if one performed an optimized chsh test on a 2-qubit state were and are given by the geometric means of the parameters and .generalizations of bell s theorem to complex networks offer a new theoretical and experimental ground for further understanding quantum correlations and its practical applications in information processing . 
similarly to usual bell scenarios, understanding the set of quantum correlations we can achieve, and in particular the optimal quantum violations of bell inequalities, is of primary importance. in this work we have taken a step forward in this direction, deriving the optimal violation of the bilocality inequality proposed in and generalized in for the case of a star-shaped network with independent sources. considering that the central node in the network performs arbitrary projective separable measurements and that the other parties perform projective measurements, we have obtained the optimal value for the violation of the bilocality and n-locality inequalities. our results can be understood as the generalization to complex networks of the horodecki criterion valid for the chsh inequality. we have analyzed in detail the relation between the bilocality inequality and the chsh inequality, and in particular showed that if both quantum states cannot violate the chsh inequality then the bilocality inequality also cannot be violated, thus precluding, in this sense, its use as a way to detect quantum correlations beyond the chsh case. moreover, we have shown that some quantum states can separately exhibit bell non-local correlations, but nevertheless cannot violate the bilocality inequality when considered as a whole in the network, thus proving that not all non-local states can be used to witness non-bilocal correlations ( at least according to this specific inequality ). however, all these conclusions are based on the assumption that the central node in the network performs separable measurements ( which in this scenario include measurements in the bell basis as a particular case ). this immediately opens a series of interesting questions for future research. can we achieve better violations by employing more general measurements in the central station, for instance entangled measurements in different bases, non-maximally entangled or non-projective ones? related to that, it would be highly relevant to derive new classes of network inequalities. one of the goals of generalizing bell s theorem to complex networks is exactly the idea that, since the corresponding classical models are more restrictive, it is reasonable to expect that we can find new bell inequalities allowing us to probe the non-classical character of correlations that are local according to usual lhv models. can it be that separable measurements or measurements in the bell basis allow us to detect such correlations if new bilocality or n-locality inequalities are considered? and what would happen if we considered general povm measurements in all our stations? could we witness a whole new regime of quantum states which, at the moment, instead admit an n-local classical description?
finally, one can wonder whether quantum states of higher dimensions ( qudits ) would allow for higher violations of the n-locality inequalities. _ note added : _ during the preparation of this manuscript, which contains results of a master thesis, we became aware of an independent work preprinted in february 2017. this work was supported by the erc-starting grant 3d-quest ( 3d-quantum integrated optical simulation; grant agreement no. 307783 ): http://www.3dquest.eu and the brazilian ministries mec and mctic. gc is supported by becas chile and conicyt.
bell s theorem was a cornerstone for our understanding of quantum theory, and the establishment of bell non-locality played a crucial role in the development of quantum information. recently, its extension to complex networks has been attracting growing attention, but a deep characterization of quantum behaviour is still missing for this novel context. in this work we analyze quantum correlations arising in the bilocality scenario, that is, a tripartite quantum network where the correlations between the parties are mediated by two independent sources of states. first, we prove that non-bilocal correlations witnessed through a bell-state measurement in the central node of the network form a subset of those obtainable by means of a separable measurement. this leads us to derive the maximal violation of the bilocality inequality that can be achieved by arbitrary two-qubit quantum states and arbitrary projective separable measurements. we then analyze in detail the relation between the violation of the bilocality inequality and the chsh inequality. finally, we show how our method can be extended to the n-locality scenario consisting of two-qubit quantum states distributed among the nodes of a star-shaped network.
dynamical processes in networks , such as synchronization , have been attracting much interest .a striking characteristic of many networks is that they are often formed from very simple units ( _ e.g. _ a neuron either spikes or is silent , at a certain level of description ) but can collectively exhibit a wide range of dynamics .a central question is then how dynamically simple units can produce rich collective dynamical behavior when they are coupled together in a network . in this letterwe offer a solution to this question in the context of synchronization of coupled map networks .previous work on synchronization of coupled maps focused on diffusive coupling with non - negative weights .however , in diffusively - coupled networks the synchronized network shows the same dynamical behavior as one single isolated unit ; thus , no new collective behavior is emerging here .new collective behavior could , for instance , be produced by time delays , which may remarkably make it easier for networks to synchronize . here however , instead of time delays , we consider non - diffusive coupling schemes .one particular non - diffusive coupling scheme , the so - called _ direct coupling scheme _ is motivated by biological findings ( see and the references therein ) and has been used in studies of amplitude response of coupled oscillators , although not investigated as extensively as diffusive coupling in synchronization research . in this letter , we use a direct coupling scheme to study the emergence of new collective dynamical behavior .in particular , we show the emergence of synchronized chaotic behavior in a network of non - chaotic units . to our knowledgethis is the first time that such a phenomenon is observed and analyzed in depth in mathematical network models .in contrast , synchronized chaotic behavior in a network of chaotic units and non - synchronized chaotic behavior in a network of non - chaotic units are well established phenomena .a further feature of this work is that we take the succeeding , typically in the literature neglected , facts into account .many biological networks share the following two properties : the connection structure is , in general , not symmetric . influence of neighboring units can be excitatory or inhibitory , which is modelled by positive and negative weights .it is thus essential to incorporate these characteristics in network models in order to understand the dynamical behavior of biological networks .consequently , we consider networks with arbitrary network topologies , namely , not necessarily symmetrically coupled networks with possibly both positive and negative weights . on the other hand , we restrict ourselves to networks of identical units .we mention , e.g. , , as a recent study of diffusively - coupled units with small parametric variations . in order to emphasize a general aspect , we consider in the next section networks with pairwise coupling and present a general synchronization criterion .later on , we will focus on directly coupled networks and study the emergence of new behavior .in our coupled map network model , each node is a dynamical system whose evolution is described in discrete time by iterations of a scalar map , _ i.e. 
_ by an equation of the form the interconnections are specified by a weighted , directed graph on vertices .the weight of the connection from vertex to vertex can be positive , negative or zero .we assume that the network has no self - loops , that is , for all .the in - degree of vertex is .the activity at vertex or unit at time is given by : where and are differentiable functions with bounded derivatives , and is the overall coupling strength .the function describes the dynamical behavior of the individual units whereas characterizes the interactions between different pairs of units .we are interested in synchronized solutions of eq .( [ pairwise ] ) , where the activity of all units is identical , that is , for all and .it follows from eq .( [ pairwise ] ) that a synchronized solution satisfies this equation already shows that the synchronized solution can be quite different from the dynamical behavior of an isolated unit described by .by contrast , in diffusive - type coupling , _ for all , the interaction vanishes when the network is synchronized ; therefore , the synchronized solution is identical to the behavior of the individual units , and no new dynamics can emerge from synchronization . before we explore different examples of new collective behavior, we investigate the robustness of the synchronized state against perturbations .the network is said to ( locally ) synchronize if for all starting from initial conditions in some appropriate open set .the propensity of the network to synchronize depends on the properties of the functions and and the underlying network structure .the latter can be encoded in terms of the eigenvalues of the graph laplacian for directed weighted graphs , defined as we label the eigenvalues of as . since we assume that the in - degrees are non - zero , we may write where is the ( ) identity matrix , is the diagonal matrix of vertex in - degrees and is the weighted adjacency matrix of the underlying graph .zero is always an eigenvalue of ; we denote it , and is the corresponding eigenvector .since all components of are identical , perturbations along the -direction again yield a synchronous solution . to study the remaining directions ,we define the _ k - th mixed transverse exponent _ for as : where and denotes the ith partial derivative of and is chosen such that for all .if no such exists we set . note that these exponents are evaluated along the synchronous solution ( [ 9 ] ) .they combine the dynamical behavior of the individual units and the interaction function with the network topology .the maximal mixed transverse exponent governs the synchronizability of the network , that is , system ( [ pairwise ] ) locally synchronizes if this result is rigorously derived in our companion paper .in the sequel , we restrict ourselves to functions . when , pairwise coupling reduces to direct coupling , i.e. , and the mixed transverse exponent ( [ chi_k ] ) reduces to by rearranging terms on the right hand side in ( [ 1 ] ) as this becomes formally equivalent to a system of the form with for all , i.e. , a diffusively coupled map network .thus , the conditions for synchronization of directly coupled networks ( [ chidirect ] ) can be deduced from the diffusive coupling case ( [ diffusive ] ) of . 
however , the formal equivalence obscures the roles of the system parameters and the particular coupling functions , which are important in applications .for instance , in neuronal networks , gap junctions at electrical synapses provide connections of diffusive type , whereas chemical synapses provide connections with direct coupling .the distinction is crucial for understanding the effects of different types of synapses .as already mentioned , diffusively - coupled networks have been widely studied . for the remainder of this work , we restrict ourselves to direct coupling .the definition of intertwines the effects of the resulting synchronized dynamics and the network topology .however , if is a multiple of , _i.e. _ for some constant , then these effects can be separated as the synchronization condition ( [ 5 ] ) takes the form where the lyapunov exponent of . here is chosen such that for all . in the sequel ,let denote the disk in the complex plane centered at having radius .it is easy to see that ( [ 8 ] ) is equivalent to the condition that all eigenvalues , except , are contained in , where if , for example , the synchronized solution is chaotic ( _ i.e. _ has a positive lyapunov exponent ) , then the first term in ( [ 8 ] ) has to be sufficiently negative to compensate the positive lyapunov exponent in order to ensure that the system ( [ 1 ] ) locally synchronizes .this in turn requires that the eigenvalues for be bounded away from zero , and the coupling strength lie in an appropriate interval .before turning to biologically motivated functions and , we demonstrate the emergence of synchronized chaotic behavior for the case of the tent map where an analytical treatment is possible .the tent map is given by for $ ] .its lyapunov exponent is ; thus , it is chaotic for . let and with and choose the coupling constant . by choosing different values for the target value can generate different synchronized dynamical behavior whose lyapunov exponent equals . since the absolute value of the derivative of is constant , from ( [ 8 ] ) we have that the system ( [ 1 ] ) locally synchronizes if for .for example , for and , the synchronized dynamics is chaotic although the individual units are not .furthermore , in this case condition ( [ 6 ] ) is satisfied if all eigenvalues of the graph laplacian , except , are contained in .we now apply the foregoing ideas to models of neuronal networks .we point out that the equations usually considered in neural network theory , can be derived from ( [ 1 ] ) with and .thus the dynamics of are determined by the dynamics of and hence our results also apply to networks given in the form ( [ y ] ) .a neuronal network consists of neurons linked by synaptic connections , which are directed and weighted . for an excitatory synapsethe weight is positive , and the presynaptic neuron increases the activity of the postsynaptic neuron according to its weight , whereas for an inhibitory one the weight is negative , and the postsynaptic activity is decreased . in this model , the individual dynamicsis governed by ( [ map ] ) with , where represents dissipation and is a bias term , which could also include _e.g. _ an external input . the interactions between the neurons are modeled by the sigmoidal function , where with .the resulting synchronized solution satisfies this is a generalization of the dynamics considered for in . 
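before turning to the numerical results, a minimal python sketch may help fix the model and the role of the graph laplacian: it builds the in-degree-normalised laplacian of a weighted directed graph and iterates the directly coupled network. the leaky-unit map f(x) = a x + b and the sigmoidal coupling g(x) = tanh(c x), as well as all parameter values and the all-to-all topology, are hypothetical choices of ours made for illustration, since the exact functions and values used for the figures are not reproduced above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(w, f, g, eps, x0, steps):
    """iterate x_i(t+1) = f(x_i(t)) + (eps / d_i) * sum_j w_ij g(x_j(t)),
    where d_i = sum_j w_ij is the weighted in-degree of unit i."""
    d = w.sum(axis=1)
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = f(x) + (eps / d) * (w @ g(x))
        traj.append(x.copy())
    return np.array(traj)

def laplacian(w):
    """in-degree-normalised laplacian l = i - d^{-1} w of the weighted digraph."""
    d = w.sum(axis=1)
    return np.eye(len(d)) - w / d[:, None]

# hypothetical parameters: leaky units f(x) = a*x + b, sigmoidal coupling tanh(c*x)
a, b, c, eps, n = 0.5, 0.1, 4.0, 1.0, 10
f = lambda x: a * x + b
g = lambda x: np.tanh(c * x)

# all-to-all coupled network (no self-loops), unit weights
w = np.ones((n, n)) - np.eye(n)

lam = np.linalg.eigvals(laplacian(w))
print("laplacian eigenvalues:", np.round(np.sort(lam.real), 3))  # 0 and n/(n-1) here

traj = simulate(w, f, g, eps, rng.uniform(-1, 1, n), steps=2000)
spread = traj.max(axis=1) - traj.min(axis=1)   # -> 0 when the network synchronises
print("final spread across units:", spread[-1])
print("last few values of unit 0:", np.round(traj[-5:, 0], 4))
```

varying the coupling strength eps and the functions f and g in this sketch is a quick way to explore the different synchronised regimes discussed in the remainder of this section.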
in fig .[ fig.2 ] the lyapunov exponent of eq .( [ 4 ] ) is plotted for a set of parameter values .although the dynamical behavior of the individual units is very simple ( there is a globally attracting fixed point ) , the collective behavior can be non - trivial and even chaotic .note that dynamical behavior can be controlled by varying the coupling coefficient .we now fix so that the synchronized behavior is chaotic . in fig .[ fig.6 ] the mixed transverse exponent is plotted as a function of , which is here taken to be real for simplicity of graphical depiction .the figure shows that the network locally synchronizes if approximately since in that case for all .we illustrate the dynamics in an all - to - all coupled network of leaky neurons .the eigenvalues fall into the range given by ( [ 7 ] ) when . to see this , recall that the laplacian of an all - to - all coupled network on units has one eigenvalue equal to zero and all other eigenvalues equal to hence , globally coupled networks having more than four vertices should synchronize to a common trajectory , which , according to fig .[ fig.2 ] , is chaotic , whereas smaller networks do not synchronize .this is confirmed by the simulation results of fig .[ fig.7 ] . as a second model of a neuronal networkwe consider a sigmoidal neuron dynamics with . in this casethe neuron behaves like one with bias term .we take the interactions between the neurons to be also given by a sigmoidal function .the resulting synchronized dynamics satisfies for the special case , the dynamics of eq .( [ 3 ] ) has been analytically shown to be chaotic if . herewe consider a whole range of -values .the bifurcation diagram of eq .( [ 3 ] ) is plotted in fig .[ fig.3 ] , for a set of parameter values .it is seen that the dynamics has a complicated dependence on , and there are many regions of chaotic behavior interspersed with periodic windows . in fig .[ fig.5 ] the mixed transverse exponent is plotted as a function of the complex eigenvalue , where the blue color shows regions of synchronization .figure [ fig.9 ] shows the onset of synchronization to chaos in a random directed network of 100 sigmoidal neurons , where the probability of a directed link from a vertex to another is taken to be 0.25 for a positive link and 0.01 for a negative link . as in the leaky neuron model , monotonic individual dynamicsis replaced by collective chaotic behavior , this time in a random directed network having both excitatory and inhibitory links . by adjusting the global coupling strength , one can observe a wide variety of synchronized dynamical behavior .besides the emergence of chaos in networks of simple units , our theory can also be used to show the possibility of simple synchronous dynamics in a network of chaotic units . in other words ,chaos is replaced by simpler behavior in the network .the field of chaos control is extensive and includes several well - established methods ; for an overview see and the references therein . in our case, the network achieves chaos suppression through synchronization of its units . as an example we study chaos suppression in a network of coupled chaotic logistic maps .it is well - known that the logistic map \mbox { and } x\in [ 0,1],\ ] ] undergoes a period doubling route to chaos as the parameter is increased from 0 to 4 . in the sequel we will consider two different values for the parameter . for logistic map possesses an attracting fixed point and the lyapunov exponent is given by .thus is dynamically simple . 
on the other hand for logistic map is maximally chaotic with a lyapunov exponent .consider a network of chaotic logistic maps , with and . in this casethe synchronous solution is given by .so the whole synchronized network displays simple dynamical behavior , although all units in the network are chaotic .it follows from ( [ c * ] ) and ( [ r * ] ) that the network synchronizes if all eigenvalues , for , are contained in .as we have seen , the synchronous solution can be simple although all units of the network are chaotic . in this caseit is possible to state a sufficient condition for synchronization without the explicit calculation of the laplacian eigenvalues .it follows from gershgorin s disk theorem that all eigenvalues of are contained in , where , and equality holds if and only if the weights are all nonnegative or all nonpositive .note that can be much larger than 1 if there exist vertices in the graph with small in - degree ( ) , due to cancellations of positive and negative weights . ] on the other hand , by ( [ c * ] ) and ( [ r * ] ) , the system synchronizes if all eigenvalues , for , of are contained in .consequently , a sufficient condition for synchronization is given by we consider the case where . in prove that ( [ disk ] ) holds if and only if note that ( [ independent ] ) can only be satisfied if the resulting synchronous behavior is not chaotic , since the right - hand - side of ( [ independent ] ) is non - positive .hence , if the synchronized solution is not chaotic , it is possible to use the spectral bound , instead of the whole spectrum of , to give a sufficient condition for synchronization .the advantage is that from ( [ r ] ) one can immediately estimate the effect of changing the network weights without lengthy eigenvalue calculations .in diffusively - coupled networks , the whole synchronized network displays the same behavior as any single individual unit ; hence , complex behavior can not emerge through synchronization of dynamically simple units .in contrast , as we have shown in this letter , the direct - coupling scheme leads to new collective dynamical behavior when the network synchronizes .we have given an analytical condition for synchronization in terms of the spectrum of the generalized graph laplacian and the dynamical properties of the individual units and coupling functions .in particular , we have shown that synchronous chaotic behavior can emerge in networks of simple units , and conversely , chaos can be suppressed in networks of chaotic units through synchronization .these results represent a further step towards answering a fundamental question in complexity , namely , how complex collective behavior emerges in networks of simple units .the setting presented here allows for studying synchronization in general network architectures .such generality is important for applications because the connection structure of many real - world networks is unidirectional and the influence of neighboring units can be excitatory or inhibitory , as in neuronal networks .we have applied our theoretical findings to two neuronal network models , and have shown that , by changing a single parameter such as the coupling constant , the network can exhibit quite a rich range of dynamical behavior in its synchronized state .the results presented here provide insight on how new dynamical behavior may be induced in neuronal networks by changing the synaptic coupling strengths in a learning process .
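as a closing numerical illustration of the chaos-suppression mechanism described above, the sketch below couples fully chaotic logistic maps so that the synchronised map is a logistic map in the fixed-point regime. the particular coupling function, target map, network and initial conditions are our own choices for illustration and are not taken from the original example.

```python
import numpy as np

rng = np.random.default_rng(1)

f = lambda x: 4.0 * x * (1.0 - x)      # isolated unit: fully chaotic logistic map
g = lambda x: -2.0 * x * (1.0 - x)     # coupling function (our choice), eps = 1
eps = 1.0
# synchronised map h(s) = f(s) + eps*g(s) = 2 s (1 - s): attracting fixed point s* = 1/2

# lyapunov exponent of an isolated unit (should come out close to ln 2)
x, lyap, steps = 0.3, 0.0, 10000
for _ in range(steps):
    lyap += np.log(abs(4.0 * (1.0 - 2.0 * x)))
    x = f(x)
print("isolated-unit lyapunov exponent:", lyap / steps)

# all-to-all network of n chaotic units with direct coupling
n = 10
w = np.ones((n, n)) - np.eye(n)
d = w.sum(axis=1)
x = rng.uniform(0.4, 0.6, n)
for _ in range(200):
    x = f(x) + (eps / d) * (w @ g(x))
print("final states:", np.round(x, 6))   # all close to the simple fixed point 1/2
print("spread:", x.max() - x.min())
```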
we study synchronization of non - diffusively coupled map networks with arbitrary network topologies , where the connections between different units are , in general , not symmetric and can carry both positive and negative weights . we show that , in contrast to diffusively coupled networks , the synchronous behavior of a non - diffusively coupled network can be dramatically different from the behavior of its constituent units . in particular , we show that chaos can emerge as synchronized behavior although the dynamics of individual units are very simple . conversely , individually chaotic units can display simple behavior when the network synchronizes . we give a synchronization criterion that depends on the spectrum of the generalized graph laplacian , as well as the dynamical properties of the individual units and the interaction function . this general result will be applied to coupled systems of tent and logistic maps and to two models of neuronal dynamics . our approach yields an analytical understanding of how simple model neurons can produce complex collective behavior through the coordination of their actions . preprint . final version in : + epl , * 89 * ( 2010 ) 20002 + doi : 10.1029/0295 - 5075/89/20002
the ( edinburgh ) logical framework ( lf ) is a dependent type theory introduced by harper , honsell and plotkin as a framework for specifying and reasoning about formal systems .it has found many applications , including proof - carrying code .the twelf system has been used to mechanize reasoning about lf specifications .the cornerstone of lf is the idea of encoding _ judgments - as - types _ and _ proofs - as - terms _ whereby judgments of a specified formal system are represented as lf - types and the lf - terms inhabiting these lf - types correspond to valid deductions for these judgments .hence , the validity of a deduction in a specified system is equivalent to a type checking problem in lf .therefore correct use of lf to encode other logics depends on the proofs of correctness of type checking algorithms for lf .type checking in lf is decidable , but proving decidability is nontrivial because types may contain expressions with computational behavior .this means that typechecking depends on equality - tests for lf - terms and lf - types .several algorithms for such equality - tests have been proposed in the literature .harper and pfenning present a type - driven algorithm that is practical and also has been extended to a variety of richer languages .the correctness of this algorithm is proved by establishing soundness and completeness with respect to the definitional equality rules of lf .these proofs are involved : harper and pfenning s detailed pencil - and - paper proof spans more than 30 pages , yet still omits many cases and lemmas .we present a formalization of the main results of harper and pfenning s article . to our knowledgethis is the first formalization of these or comparable results .while most of the formal proofs go through as described by , we found a few do _ not _ go through as described , and there is a _gap _ in the proof of soundness .although the problem can be avoided easily by adding to or changing the rules of , we found that it was still possible to prove the original results , though the argument was nontrivial .our formalization was essential not only to find this gap in harper and pfenning s argument , but also to find and validate the possible repairs relatively quickly .we used isabelle / hol and the nominal datatype package for our formalization .the latter provides an infrastructure for reasoning conveniently about datatypes with a built - in notion of alpha - equivalence : it allows to specify such datatypes , provides appropriate recursion combinators and derives strong induction principles that have the usual variable convention already built - in .the nominal datatype package has already been used to formalize logical relation arguments similar to ( but much simpler than ) those in harper and pfenning s completeness proof ; it is worth noting that logical relations proofs are currently not easy to formalize in twelf itself , despite the recent breakthrough by . 
besides proving the correctness of their equivalence algorithm , harper and pfenning also sketched a proof of decidability .unfortunately , since isabelle / hol is based on classical logic , proving decidability results of this kind is not straightforward .we have formalized the essential parts of the decidability proof by providing inductive definitions of the complements of the relations we wish to decide .it is clear by inspection that these relations define recursively enumerable sets , which implies decidability , but we have not formalized this part of the proof .a complete proof of decidability would require first developing a substantial amount of computability theory within isabelle / hol , a problem of independent interest we leave for future work .we followed the arguments in harper and pfenning s article very closely using the nominal datatype package for our formalisation , but the current system does not allow us to generate executable code directly from definitions involving nominal datatypes .we therefore also implemented a type - checking algorithm based on the locally nameless approach for representing binders .we proved that the nominal datatype formalization of harper and pfenning s algorithm is equivalent to the locally nameless formulation .moreover , by making the choice of fresh names explicit , we can generate a working ml implementation directly from the verified formalization .[ [ outline ] ] outline + + + + + + + we first briefly review lf and its representation in the nominal datatype package ( sec .[ sec : background ] ) . in sec .[ sec : formalization ] , we report on our formalization .to ease comparison , sec .[ sec : formalization ] follows the structure of closely , although this article is self - contained .sections [ sec : syntactic][sec : typechecking ] summarize our formalization of the basic syntactic properties of lf and soundness and completeness of the equivalence and typechecking algorithms .we discuss additional lemmas , proof details , and other complications arising during the formalization , and discuss the gap in the soundness proof and its solutions in detail . the remainder of sec .[ sec : formalization ] reports upon formalizations of additional results whose proofs were only sketched by .these include the admissibility of strengthening and strong extensionality rules ( sec .[ sec : strengthening ] ) , a partial formalization of decidability of algorithmic typechecking for lf , and a discussion of the current limitations of isabelle / hol in formalizing proofs about decidability ( sec .[ sec : decidability ] ) , the existence and uniqueness of quasicanonical forms ( sec .[ sec : quasicanonical ] ) , and a partial formalization of an example proof of adequacy ( sec .[ sec : adequacy ] ) , and a discussion of complications in the proof sketched in . 
in sec .[ sec : locally - nameless ] we define and verify the correctness of a type checking algorithm based on the locally nameless representation of binders , from which isabelle / hol can generate executable code .this amounts to a verified typechecker for lf , an original contribution of this article .[ sec : discussion ] summarizes the authors experience with the formalization , sec .[ sec : related ] discusses related and future work and sec .[ sec : concl ] concludes .[ [ contributions ] ] contributions + + + + + + + + + + + + + the metatheory of lf is well - understood : it had been studied for many years before the definitive presentation in .their main results were not in serious doubt , and formalizing such work might strike some readers as perverse or pedantic .nevertheless , our formalization is an original and significant contribution to the study of logical frameworks and mechanized metatheory , because : it tests the capabilities of the nominal datatype package for formalizing a large and complex metatheoretical development , it provides high confidence in algorithms that are widely trusted but have never been mechanically verified , it elucidates a few subtle issues in the basic metatheory of lf , and it constitutes a re - usable library of formalized results about lf , providing a foundation for verification of twelf - style meta - reasoning about lf specifications , extensions to lf , or related type theories that are not as well - understood .this article is a revised and extended version of a previous conference paper presenting our initial formalization of the metatheory of lf .the formal development described by this article can be obtained by request from the authors , and is available at ` http://isabelle.in.tum.de/nominal/lf/ ` .[ sec : background ] this article assumes some familiarity with formalization in isabelle / hol and its ml - like notation for functions and definitions .we used the nominal datatype package in isabelle / hol to formalize the syntax and judgments of lf .the key features we rely upon are 1 .support for _ nominal datatypes _ with a built - in notion of binding ( i.e. -equivalence classes ) , 2 .facilities for defining functions over nominal datatypes ( such as substitution ) by _ ( nominal ) primitive recursion _ , and 3 ._ strong induction principles _ for datatypes and inductive definitions that build in barendregt - style renaming conventions .together , these features make it possible to formalize most of the definitions and proofs following their paper versions closely .we will not review the features of this system in this article , but will discuss details of the formalization only when they introduce complications .the interested reader is referred to previous work on nominal techniques and the nominal datatype package for further details .the logical framework lf is a dependently - typed lambda - calculus .we present it here following closely the article by harper and pfenning , to which we refer from now on as for brevity .the syntax of lf includes _kinds _ , _ type families _ and _ objects _ defined by the grammar : [ cols="<,>,^ , < " , ] [ [ metrics - about - the - formalization ] ] metrics about the formalization + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in table [ tab : meaningless - metrics ] , we report some simple metrics about our formalization such as the sizes , number of lines of text , and number of lemmas in each theory in the main formalization . 
as table [tab : meaningless - metrics ] shows , the core ` lf ` theory accounts for about 20% of the development .these syntactic properties are mostly straightforward , and their proofs merit only cursory discussion in , but some lemmas have many cases which must each be handled individually .the ` decidability ` theory accounts for another 15% ; the quasidecidability proofs are verbose but largely straightforward .the ` locallyn ` theory proves that the nominal datatypes version of lf is equivalent to a locally nameless formulation ; this accounts for about 25% of the development .the effort involved in this part was therefore quite substantial : it can be explained by the lack of automatic infrastructure for the locally nameless representation of binders in isabelle / hol , but also by the inherent subtleties when working with this representation .a number of lemmas need to be carefully stated , and in a few cases in rather non - intuitive ways .the remaining theories account for at most 510% of the formalization each ; the ` weakalgorithm ` theory defines the weak algorithmic equivalence judgment and proves the additional properties needed for the third solution , and accounts for only around 2% of the total development .the merit of metrics such as proof size or number of lemmas is debatable .we have not attempted to distinguish between meaningful lines of proof vs. blank or comment lines ; nor have we distinguished between significant and trivial lemmas .nevertheless , this information should at least convey an idea of the _ relative _ effort involved in each part of the proof .[ [ correctness - of - the - representation ] ] correctness of the representation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the facilities for defining and reasoning about languages with binding provided by the nominal datatype package are convenient , but their use may not be persuasive to readers unfamiliar with nominal logic and abstract syntax .thus , a skeptical reader might ask whether these representations , definitions and reasoning principles are really _ correct _ ; that is , whether they are equivalent to the definitions in , as formalized using some more conventional approach to binding syntax . for higher - order abstract syntax representations , this property is often called _ adequacy _ ; this term appears to have been coined in the context of lf , due to the potential problems involved in reasoning about higher - order terms modulo alpha , beta and eta - equivalence .adequacy is also important for nominal techniques and deserves further study .we believe that the techniques explored in existing work on the semantics of nominal abstract syntax and its implementation in the nominal datatype package suffices for informally judging the correctness of our formalization .there has also been some prior work on formalizing adequacy results for nominal datatypes via isomorphisms .proves a bijective correspondence between nominal datatypes and a conventional named implementation of the -calculus modulo -equivalence .have formalized isomorphisms between nominal and de bruijn representations , and they provide further citations to several other isomorphism results .our proof of equivalence to a locally nameless representation described in sec .[ sec : locally - nameless ] also gives evidence for the correctness of the nominal datatype representation . 
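for readers unfamiliar with the locally nameless representation used in sec. [ sec : locally - nameless ], the following self-contained python sketch illustrates the idea on untyped lambda-terms: bound variables are de bruijn indices, free variables carry names, and going under a binder is handled by the variable-opening operation. this is only an illustration of the representation technique; it is not extracted from, and does not model, the isabelle/hol development itself, and all names are ours.

```python
from dataclasses import dataclass

# locally nameless terms: bound variables are de bruijn indices,
# free variables carry names; a binder (lam) does not bind a name,
# so alpha-equivalent terms have identical representations.

@dataclass(frozen=True)
class BVar:          # bound variable: index counting enclosing binders
    index: int

@dataclass(frozen=True)
class FVar:          # free variable: a name
    name: str

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

@dataclass(frozen=True)
class Lam:
    body: object

def open_term(t, u, k=0):
    """replace the bound variable with index k in t by the term u
    (used when going under a binder: open_term(body, FVar(fresh_name)))."""
    if isinstance(t, BVar):
        return u if t.index == k else t
    if isinstance(t, FVar):
        return t
    if isinstance(t, App):
        return App(open_term(t.fun, u, k), open_term(t.arg, u, k))
    return Lam(open_term(t.body, u, k + 1))

def close_term(t, name, k=0):
    """inverse of opening: abstract the free variable `name` as index k."""
    if isinstance(t, BVar):
        return t
    if isinstance(t, FVar):
        return BVar(k) if t.name == name else t
    if isinstance(t, App):
        return App(close_term(t.fun, name, k), close_term(t.arg, name, k))
    return Lam(close_term(t.body, name, k + 1))

def locally_closed(t, depth=0):
    """well-formedness: no bound index escapes its enclosing binders."""
    if isinstance(t, BVar):
        return t.index < depth
    if isinstance(t, FVar):
        return True
    if isinstance(t, App):
        return locally_closed(t.fun, depth) and locally_closed(t.arg, depth)
    return locally_closed(t.body, depth + 1)

ident = Lam(BVar(0))                        # the identity \x. x
print(locally_closed(ident))                # True
print(open_term(ident.body, FVar("x")))     # FVar(name='x')
print(close_term(FVar("x"), "x"))           # BVar(index=0)
```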
in any case , our formalization has exposed some subtle issues which make sense in the context of lf , independently of whether or not nominal datatypes in isabelle / hol really capture our informal intuitions about abstract syntax with binding . [[ reflecting - on - formalizing - lf ] ] reflecting on formalizing lf + + + + + + + + + + + + + + + + + + + + + + + + + + + + it has been observed ( as discussed , for example , by ) that the process of formalization can suggest changes that both ease formalization and clarify the original system .likewise , our formalization provides a basis for reflecting on how the lf metatheory might be adapted to make it easier to formalize .most obviously , many of the problems we encountered with soundness disappear if we simply add the omitted extensionality rule or change the equivalence algorithm .a more subtle complication we encountered was that since the algorithmic rules in do not enforce well - formedness , it is not even guaranteed that a variable appearing in one of the terms being compared also appears in the context .this necessitates extra freshness conditions on many rules and induction hypotheses to ensure that strong nominal induction principles can be used safely .building these constraints into the algorithmic rules might make several of the proofs about the equivalence algorithm cleaner .another practical consideration was that the syntax and rules of lf in exhibit redundancy , which leads to additional ( albeit straightforward ) formalization effort .for example , constants , dependent products , and applications each appear at more than one level of the syntax , resulting in proofs with redundant cases .similarly , because objects , kinds and types are defined by mutual recursion , each inductive proof about syntax needs to have three inductive hypotheses and ten cases .likewise , any proof concerning the definitional judgments needs to state eight simultaneous induction hypotheses and thirty - five cases .collapsing the three levels of lf syntax into one level , and collapsing the many definitional judgments into a smaller number could make the formalization much less verbose , as in pure type systems , at the cost of increasing the distance between the paper version and the formalization . on the other hand , such an approachcould also make it easier to generalize proofs about lf to richer type theories .[ sec : related ] s lego formalization of pure type systems is probably the most extensive formalization of a dependent type theory in a theorem prover . their formalization introduced the locally nameless variant of de bruijn s name - free approach and considered primarily syntactic properties of pure type systems with -equivalence , including a proof of strengthening . subsequently verified the partial correctness of typechecking algorithms for certain classes of pure type systems including lf . 
completely formalizing metatheoretic and syntactic proofs about languages and logics withname - binding has been a long - standing open problem in computational logic .we will not give a detailed survey of all of these techniques here , but mention a few recent developments .in the last five years , catalyzed by the poplmark challenge , there has been renewed interest in this area .have developed a methodology for formalizing metatheory in coq using the locally nameless representation to manage binding , and using cofinite quantification to handle fresh names .chlipala s _ parametric higher - order abstract syntax _ is another recently developed technique for reasoning about abstract syntax in coq , and has been applied to good effect in reasoning about compiler transformations .are developing cinic , a variant of coq that provides built - in support for nominal abstract syntax ( generalizing a simple nominal type theory developed by ) .have developed abella , a proof assistant for reasoning about higher - order abstract syntax , inductive definitions , and generic quantification ( similar to nominal logic s fresh - name quantifier ) .have recently discovered techniques for performing logical relations proofs in twelf .formalizing the results in this article using these or other emerging tools would provide a useful comparison of these approaches , particularly concerning decidability proofs , which ought to be easier in constructive logics .algorithms for equivalence and canonicalization for dependent type theories have been studied by several authors .prior work on equivalence checking for lf has focused on first checking well - formedness with respect to simple types , then - or -normalizing ; these approaches are discussed in detail by .algorithm is similar to harper and pfenning s but operates on untyped terms .approach involves first type - directed -expansion and then -normalization , and relies on standard properties such as the church - rosser theorem , strong normalization of -reduction and strengthening .extends this proof technique to show termination of coquand s and harper and pfenning s algorithms , and gives a terminating type - directed algorithm for checking -equivalence in system f. it may be interesting to formalize these algorithms and proofs and compare with harper and pfenning s proof . our formalization provides a foundation for several possible future investigations .we are interested in extending our formalization to include verifying twelf - style meta - reasoning about lf specifications , following harper and licata s detailed informal development of canonical lf .doing so could make it possible to extract isabelle / hol theorems from twelf proofs , but as discussed earlier , formalizing canonical lf , hereditary substitutions , and the rest of harper and licata s work appears to be a substantial challenge. 
it would also be interesting to extend our formalization to accommodate extensions to lf involving ( ordered ) linear logic , concurrency , proof - irrelevance , or singleton kinds , as discussed by .we hope that anyone who proposes an extension to lf will be able to use our formalization as a starting point for verifying its metatheory .[ sec : concl ] lf is an extremely convenient tool for defining logics and other calculi involving binding syntax .it has many compelling applications and underlies the system twelf , which has a proven record in formalizing many programming language calculi .hence , it is of intrinsic interest to verify key properties of lf s metatheory , such as the correctness and decidability of the typechecking algorithms .we have done so , using the nominal datatype package for isabelle / hol .the infrastructure provided by this package allowed us to follow the proof of harper and pfenning closely .for our formalization we had the advantage of working from harper and pfenning s carefully - written informal proof , which withstood rigorous mechanical formalization rather well .still we found in this informal proof one gap and numerous minor complications .we have shown that they can be repaired .we have also partially verified the decidability of the equivalence and typechecking algorithms , although some work remains to formally prove decidability per se .formalizing decidability proofs of any kind in isabelle / hol appears to be an open problem , so we leave this for future work .while verifying correctness of proofs is a central motivation for doing formalizations , it is not the only one .there is a second important benefit they can be used to experiment with changes to the system rapidly . by replaying a modified formalization in a theorem proverone can immediately focus on places where the proof fails and attempt to repair them rather than re - checking the many cases that are unchanged .this capability was essential in fixing the soundness proof , and it illustrates one of the distinctive advantages of performing such a formalization . had we attempted to repair the gap using only the paper proof , experimenting with different solutions would have required manually re - checking the roughly 31 pages of paper proofs for each change .our formalization is not an end in itself but also provides a foundation for further study in several directions .researchers developing extensions to lf may find our formalization useful as a starting point for verifying the metatheory of such extensions .we plan to further investigate hereditary substitutions and adequacy proofs in lf and canonical lf .more ambitiously , we contemplate formalizing the meaning and correctness of metatheoretic reasoning about lf specifications ( as provided by the twelf system ) inside isabelle / hol , and extracting isabelle / hol theorems from twelf proofs .we are extremely grateful to bob harper for discussions about lf and the proof .benjamin pierce and stephanie weirich have also made helpful comments on drafts of this paper . ,michael , n. , stump , a. , and virga , r. 2003 . a trustworthy proof checker . _31 _ , 231260 . ,charguraud , a. , pierce , b. c. , pollack , r. , and weirich , s. 2008 .engineering formal metatheory . in _acm , 315 . ,bohannon , a. , fairbairn , m. , foster , j. n. , pierce , b. c. , sewell , p. , vytiniotis , d. , washburn , g. , weirich , s. , and zdancewic , s. 2005 .mechanized metatheory for the masses : the poplmark challenge . in _tphols_. 5065 .\2002 . 
executing higher order logic . in _ proc . of the international workshop on types for proofs and programs_. number 2277 in lncs .. nominal inversion principles . in _tphols_. 7185 .\2006 . completeness and herbrandtheorems for nominal logic ._ 71 , _ 1 , 299320 .\2009 . a simple nominal type theory . _ 228 _ , 3752 .lfmtp 08 : proceedings of the fourth international workshop on logical frameworks and meta - languages .parametric higher - order abstract syntax for mechanized semantics . in _icfp _ , j. hook and p. thiemann , eds .acm , 143156 .an algorithm for testing conversion in type theory . in_ logical frameworks _ , g. huet and g. plotkin , eds .cambridge university press , 255279 .lambda - calculus notation with nameless dummies , a tool for automatic formula manipulation ._ 34 , _ 5 , 381392 .\2002 . a new approach to abstract syntax with variable binding ._ 13 _ , 341363 . , miller , d. , and nadathur , g. 2008combining generic judgments with recursive definitions . in _lics_. 3344 .\1999 . some logical and syntactical observations concerning the first - order dependent type system ._ 9 , _ 4 , 335359 .2005a . justifying algorithms for --conversion . in _ fossacs _ , v. sassone , ed .lncs , vol .springer , 410424 .2005b . a syntactic approach to eta equality in type theory . in _acm , 7584 . ,honsell , f. , and plotkin , g. 1993 . a framework for defining logics ._ 40 , _ 1 , 143184 .mechanizing metatheory in a logical framework ._ 17 , _ 4 - 5 , 613673 .\2005 . on equivalence and canonical forms in the lf type theory ._ 6 , _ 1 , 61101 .some lambda calculus and type theory formalized ._ 23 , _ 3 - 4 , 373409 .formalising in nominal isabelle crary s completeness proof for equivalence checking . in _entcs , vol . 196 .proof - carrying code . in _acm , 106119 . ,paulson , l. c. , and wenzel , m. 2002 . .lncs , vol . 2283 . springer .proof pearl : de bruijn terms really do work . in _lncs , vol .springer , 207222 .logical frameworks . in _handbook of automated reasoning _ , j. a. robinson and a. voronkov , eds .elsevier and mit press , 10631147 .system description : twelf a meta - logical framework for deductive systems . in _lnai , vol .202206 .proof pearl : the power of higher - order encodings in the logical framework lf . in _tphols_. 246261 .nominal logic , a first order theory of names and binding ._ 183 _ , 165193 .\2006 . alpha - structural recursion and induction ._ 53 , _ 3 ( may ) , 459506 .\1995 . a verified typechecker . in _ tlca _ ,m. dezani - ciancaglini and g. d. plotkin , eds .lncs , vol .springer , 365380 .\2008 . structural logical relations . in _ lics_. ieee computer society , 6980 . , pitts , a. m. , and gabbay , m. j. 2003 .freshml : programming with binders made simple . in _eighth acm sigplan international conference on functional programming ( icfp 2003 ) , uppsala , sweden_. acm press , 263274 .reflections on trusting trust ._ 27 , _ 8 , 761763 .nominal techniques in isabelle / hol ._ 40 , _ 4 , 327356 . ,berghofer , s. , and norrish , m. 2007 .barendregt s variable convention in rule inductions . in _lnai , vol .4603 . 3550 . , cheney , j. , and berghofer , s. 2008 .mechanizing the metatheory of lf . in _ proceedings of the 23rd annual ieee symposium on logic in computer science ( lics 2008)_. 4556 .nominal techniques in isabelle / hol . in _lncs , vol .3632 . 3853 .revisiting cut - elimination : one difficult proof is really a proof . in _ rta _ , a. voronkov , ed .lecture notes in computer science , vol . 5117 .springer , 409424 . ,cervesato , i. 
, pfenning , f. , and walker , d. 2003 . a concurrent logical frameworki : judgments and properties . tech .cmu - cs-02 - 101 , carnegie mellon university . may . , stump , a. , and austin , e. 2009 . the calculus of nominal inductive constructions : an intensional approach to encoding name - bindings . in _lfmtp 09 : proceedings of the fourth international workshop on logical frameworks and meta - languages_. acm , new york , ny , usa , 7483 . received october 2009 ; revised april 2010 ; accepted april 2010if then and .
lf is a dependent type theory in which many other formal systems can be conveniently embedded . however , correct use of lf relies on nontrivial metatheoretic developments such as proofs of correctness of decision procedures for lf s judgments . although detailed informal proofs of these properties have been published , they have not been formally verified in a theorem prover . we have formalized these properties within isabelle / hol using the nominal datatype package , closely following a recent article by harper and pfenning . in the process , we identified and resolved a gap in one of the proofs and a small number of minor lacunae in others . we also formally derive a version of the type checking algorithm from which isabelle / hol can generate executable code . besides its intrinsic interest , our formalization provides a foundation for studying the adequacy of lf encodings , the correctness of twelf - style metatheoretic reasoning , and the metatheory of extensions to lf . [ lambda calculus and related systems ] this is a revised and expanded version of a conference paper . cheney was supported by a royal society university research fellowship and by epsrc grant gr / s63205/01 . urban was supported by an emmy noether grant from the dfg . corresponding author : j. cheney , informatics forum , 10 crichton street , edinburgh eh8 9ab , scotland , email : ` jcheney.ed.ac.uk ` .
because the fact that several dynamical systems exhibit a chaotic behavior , there has been much interest in the study of chaos . in recent years ,the trend of analysing the chaos moved to the new phase consisting of its control and utilization : this means on one hand to design suitable controls to eliminate the chaos , and on the other hand to generate it intentionally .our goal in this work is to carry out a rigorous mathematical analysis of dynamic behavior of the whole family of the generalized lorenz system in its chaotic regime by using time delayed feedback controlling forces in the same spirit of .the controller is a nonlinear function of the state variables of the system , therefore the results obtained in this paper can be considered in some way an improvement of the results of , where the authors study the generalized lorenz system with a linear version of the control proposed here .the global dynamics of the system depends on the parameter ] . in the last section of the paper we use the same techniquepointed out in , based on the center manifold reduction and the normal form theory , in order to determine the direction , stability and period of these periodic solutions which bifurcate from the steady state .this strategy permits to derive the explicit formulas for the properties of the hopf bifurcation .moreover , we give numerical simulations of the controlled system , which indicate that when the delay passes through certain critical values , the chaotic behavior is converted to stable periodic orbit for thw whole family of systems .the generalized lorenz system is described by the following system of ordinary differential equations for the state variables _ x , y , z _ : where ] . in the chaotic regime ,the system exhibits an irregular dynamics which makes its evolution unpredictable . for this reason ,our aim is to design a suitable control which regulates the system behaviour to any given point of the form * * **= , that is the form of the two nontrivial fixed points of the uncontrolled system .namely , by designing the control , where , , , , the above system is transformed into the closed - loop one : and it can be proved that and both converge to whereas converges to .+ unfortunately , the above proposed control does not take into account that the feedback physically enters into the system at a later time , thus in order to avoid this drawback , we consider the delayed feedback controller ] , as aspected since the control stabilizes the system .thus we have the following result .the equilibrium point of system is globally asymptotically stable when , for all ] and ^{2}-3\left[2\sigma b\left(2k_1-k_2\right)+\sigma^2\left(2b\gamma+2k_1 - 4k_2-\sigma^2\right)\right].\ ] ] 1 . if , then and is monotonically increasing . therefore , when and , has no positive roots and all the characteristic roots will remain to the left of the imaginary axis for all .if , since , there is at least one positive root of and the characteristic roots can cross the imaginary axis .if , then the graph of has critical points + + and , moreover , if and , then has positive roots . according to theorem 1 , stability switches are possible for each positive root of and the cross is from left to right if , and from right to left is .+ the characteristic quasi - polynomial for has the form =0\ ] ] where : + let , , be a positive root of , and . 
then satisfies , that is equivalent to the following system : by setting where + after simplification the above system implies that thus , for each positive root , it yields the following sequence of delays for which there are pure imaginary roots of : we numerically find that for all ] the smallest value is the critical value at which stability switch occurs from stable to unstable , so that the stability is lost at and for the solution remains unstable .+ _ remark_. by solving system numerically , we find for each value of the critical delay , in particular for , for and for . actually , as we can see in fig . , is a decreasing function of on the interval ] , as shown in figs.- ( b),(c ) , and this dynamic suggests that the system exhibits hopf bifurcation , even if there is a qualitative difference due to the fact that the threshold value of delay changes .we obtain the following result .[ transv_cond ] 1 .if and , then the equilibrium point remains asymptotically stable for all .2 . if either 1 . , or 2 . , and , + then there exist and as defined above such that the equilibrium point is asymptotically stable for . furthermore ,if , then the system undergoes a hopf bifurcation at the equilibrium when .* it remains to show the transversality condition for the hopf bifurcation holds at .+ we first set + so , differentiating eq. with respective to , we obtain ^{-1}=-\frac{p'(\lambda)}{\lambda q(\lambda)}+\frac{q'(\lambda)}{\lambda q(\lambda)}-\frac{\tau}{\lambda}=\frac{(3\lambda^2 + 2a_2\lambda+a_1)e^{\lambda\tau}}{\lambda(b_2\lambda^2+b_1\lambda+b_0 ) } + \frac{2b_2\lambda+b_1}{\lambda(b_2\lambda^2+b_1\lambda+b_0)}-\frac{\tau}{\lambda}.\ ] ] using eq. , we obtain ^{-1}_{\tau=\tau_c } & = re\left[-\frac{p'(\lambda)}{\lambda q(\lambda)}\right]_{\tau=\tau_c}+re\left[\frac{q'(\lambda)}{\lambda q(\lambda)}\right]_{\tau=\tau_c}\\ % & \frac{1}{b_1 ^ 2\nu_0 ^ 2+(b_0-b_2\nu_0 ^ 2)^2}\left{(a_1 - 3\nu_0 ^ 2)\nu_0\left[(b_0-b_2\nu_0 ^ 2)\sin(\nu_0\tau_c)-b_1\nu_0\cos(\nu_0\tau_c)\right]+ % \dots+2a_2\nu_0 ^ 2\left[(b_0-b_2\nu_0 ^ 2)\cos(\nu_0\tau_c)+b_1\nu_0\sin(\nu_0\tau_c)\right]-b_1 ^ 2\nu_0 ^ 2 + 2b_2\nu_0 ^ 2(b_0-b_2\nu_0 ^ 2)\right}\\ & = \frac{3\nu_0 ^ 6 + 2(a_2 ^ 2-b_2 ^ 2 - 2a_1)\nu_0 ^4+(a_1 ^ 2 - 2a_0 a_1-b_1 ^ 2 + 2b_0b_2)\nu_0 ^ 2}{b_1 ^ 2\nu_0 ^ 4+\nu_0 ^ 2(b_0-b_2\nu_0 ^ 2)^2}\\ & = \frac{f'(\nu_0 ^ 2)}{b_1 ^ 2\nu_0 ^ 2+(b_0-b_2\nu_0 ^ 2)^2}. \end{split}\ ] ] therefore {\tau=\tau_c}=signf'(\nu_0 ^ 2).\ ] ] if , the transversality condition holds and a hopf bifurcation occurs at . + _ remark . _ if , then . if , then characteristic equation has roots with positive real parts for and close to , but this contradicts the fact that is asimptotically stable for as in theorem .thus , , then .+ the expression of as a function of is quite cumbersome , but numerical simulations show that for all ] ( see fig.(b ) ) .moreover , it s easy to prove that and , thus the hypothesis ii is always verified and the system undergoes a hopf bifurcation ] as and for ,\mathbb{r}^3) ] as where . now , by the riesz representation theorem , there exists a function of bounded variation for ] .so , we define for ,\mathbb{r}^3) ] . for ,(\mathbb{r}^3)^*) ] + and the bilinear inner product where .then and are the adjoint operators with respect to the above bilinear form . 
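before moving to the normal-form computation, we note that the stability-switch recipe just described is easy to automate. the python sketch below works with a generic characteristic quasi-polynomial p(lambda) + q(lambda) e^{-lambda tau} = 0 with cubic p and quadratic q, forms the standard cubic f(u) = |p(i nu)|^2 - |q(i nu)|^2 in u = nu^2, and returns, for each positive root, the crossing frequency, the smallest critical delay and the sign of f'(nu0^2) that governs the crossing direction. the coefficient values supplied at the end are hypothetical placeholders, since the true coefficients depend on sigma, b, r, gamma, k_1, k_2 and are not reproduced here.

```python
import numpy as np

def critical_delays(a, b):
    """stability switches for p(lam) + q(lam) e^{-lam*tau} = 0 with
    p(lam) = lam^3 + a2 lam^2 + a1 lam + a0 and q(lam) = b2 lam^2 + b1 lam + b0.
    returns (nu0, smallest tau_c, sign of f'(nu0^2)) for each positive root
    u = nu0^2 of f(u) = |p(i nu)|^2 - |q(i nu)|^2."""
    a0, a1, a2 = a
    b0, b1, b2 = b
    # f(u) = u^3 + c2 u^2 + c1 u + c0
    c2 = a2**2 - 2.0 * a1 - b2**2
    c1 = a1**2 - 2.0 * a0 * a2 - b1**2 + 2.0 * b0 * b2
    c0 = a0**2 - b0**2
    out = []
    for u in np.roots([1.0, c2, c1, c0]):
        if abs(u.imag) > 1e-9 or u.real <= 0:
            continue
        u = float(u.real)
        nu = np.sqrt(u)
        p = (1j * nu)**3 + a2 * (1j * nu)**2 + a1 * (1j * nu) + a0
        q = b2 * (1j * nu)**2 + b1 * (1j * nu) + b0
        if abs(q) < 1e-12:
            continue
        e = -p / q                                    # e^{-i nu tau} on the imaginary axis
        tau = np.mod(np.arctan2(-e.imag, e.real), 2.0 * np.pi) / nu
        fprime = 3.0 * u**2 + 2.0 * c2 * u + c1       # sign gives the crossing direction
        out.append((nu, tau, np.sign(fprime)))
    return sorted(out, key=lambda r: r[1])

# hypothetical coefficients (placeholders for the sigma, b, r, gamma, k1, k2 dependence)
print(critical_delays(a=(2.0, 5.0, 4.0), b=(6.0, 1.0, 0.5)))
```

for a given choice of the system parameters one would first assemble the coefficients a_i, b_i from the linearisation at the controlled equilibrium and then read off the smallest returned delay as the candidate critical value at which stability is lost.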
by the discussion of the previous section , we know that are eigenvalues of , thus they are also eigenvalues of .we now need to compute the eigenvectors of and corresponding to and , respectively .+ suppose that , ] , is the eigenvector of corresponding to and we find : we can easily obtain } , \frac{b(i\nu_0-\sigma)}{2bx_r^2+rb^2-i\nu_0(rb+x_r^2 ) } \end{pmatrix } e^{is\nu_0\tau_c}\ ] ] the orthonormality condition , helps us determining the value of . by the definition we have }.\ ] ] furthermore , we also have that .+ the next step is to compute the coordinates to describe the center manifold at . let be a solution of eq. when .we define on the center manifold we have , where and and are the coordinates for the center manifold in the direction of and . since is real if is real , we consider only real solutions . for , since , we have with noticing , we have + + + + and similarly we have for .thus , from eq. we get comparing the coefficients with those of eq. , we obtain + & g_20=2|d_c(|q_2^ * q_3 e^-2i_c_0-|q_2^ * q_3+|q_3^ * q_2 ) + & g_11=2|d_c|q_3^ * re(q_2 ) + & g_02=2|d_c(|q_2^ * |q_3 e^2i_c_0-|q_2^ * |q_3+|q_3^ * |q_2 ) +\ ] ] & + 2|q_2^*|d_c+ + & + 2|q_3^*|d_c .since there are and in , we still need to compute them . from and have : where expanding the above series and comparing the corrisponding coefficients , we get from eq . we know that for , comparing the coefficients with those in eq. gives that and from and and the definition of a , it follows that hence and similarly , from and we get where are constant vectors . now it remains to determine appropriate values for and . from the definition of a and , we obtain and by we have : and substituting and into and noticing that and we obtain and similarly , substituting and into we can get : where + thus , we can determine the coefficients , and .therefore , each i d determined by the parameters and the delay in .so , we can compute the following values : +\frac{g_{21}}{2 } , \qquad \mu_2=-\frac{re\{c_1(0)\}}{re\{\lambda ' ( \tau_c)\}}\ ] ] which determine the quantities of bifurcating periodic solutions in the center manifold at the critical value of delay .the sign of determines the direction of hopf bifurcation : if , then the bifurcating periodic solutions exist for and the bifurcation is supercritical ( subcritical ) .the quantity determines the stability of the bifurcating periodic solutions , i.e. they are stable ( unstable ) for .assume that conditions of theorem hold. then 1 .if , then there exist periodic solutions bifurcating from for , and they are orbitally asymptotically stable as ; 2 . if , then there exist periodic solutions bifurcating from for , and they are orbitally asymptotically stable as . _remark_. though the complexity of the expression of will not allow a direct study of its sign as a function of , we show the numerical simulations performed : is negative for all ] ( see fig. 
( b ) ) , as expected because . so we can conclude that the bifurcation arising from system is supercritical and the periodic solutions are stable for all $ ] . for particular values of the parameter we obtain the particular systems of the family . we have used a feedback technique with a nonlinear control to achieve the stabilization of the steady states and we have carried out a mathematical analysis of the global dynamics of the system and studied the dependence on the time delay and the characteristic parameter , also investigating the existence of stability switches . we have also shown that the time delay can destabilize the steady states and lead to periodic solutions through hopf bifurcation . by using the normal form theory and center manifold argument , we determine the direction , stability and period of these periodic solutions and also prove that they are stable and the bifurcation is supercritical for all $ ] . k.l . cooke and p. van den driessche , on zeros of some transcendental equations , funkcial . ekvac . 29 , 77 - 90 ( 1986 ) . k.l . cooke and z. grossman , discrete delay , distributed delay and stability switches , j. math . anal . appl . 86 , 592 - 627 ( 1982 ) . b. hassard , n. kazarinoff , y. wan , theory and applications of hopf bifurcation , cambridge university press ( 1981 ) . j. murray , mathematical biology , springer ( 1993 ) . m.y . li , h. shu , global dynamics of a mathematical model for htlv - i infection of t cells with delayed ctl response , nonlin . anal . : real world appl . 13 , 1080 - 1092 ( 2012 ) . y. song , j. wei , bifurcation analysis for chen s system with delayed feedback and its application to control of chaos , chaos , solitons and fractals 22 , 75 - 91 ( 2004 ) . m.c . lombardo , m. sammartino , delayed feedback control for the benard problem , proceedings `` wascom 2003 '' 12th conference on waves and stability in continuous media . g. gambino , m.c . lombardo , m. sammartino , global linear feedback control for the generalized lorenz system , chaos , solitons and fractals 29 , 829 - 837 ( 2006 ) . j. lu , g. chen , d. cheng , s. celikovsky , bridge the gap between the lorenz system and the chen system , int . j . bifurcat . chaos 12 ( 2002 ) . f. gao , w.q . liu , v. sreeram , k.l . teo , nonlinear feedback control for the lorenz system , dynamics and control 11 , 57 - 69 ( 2001 ) . saker , stability and hopf bifurcations of nonlinear delay malaria epidemic model , nonlinear analysis : real world applications 11 ( 2 ) , 784 - 799 ( 2010 ) . j.e . marsden , m. mccracken , the hopf bifurcation and its applications , springer ( 1976 ) . t. kalmar - nagy , g. stepan , f.c . moon , subcritical hopf bifurcation in the delay equation model for machine tool vibrations , nonlinear dynamics 26 , 121 - 142 ( 2001 ) .
in this work we propose a feedback approach to regulate the chaotic behavior of the whole family of the generalized lorenz system , by designing a nonlinear delayed feedback control . we first study the effect of the delay on the dynamics of the system and we investigate the existence of hopf bifurcations . then , by using the center manifold reduction technique and the normal form theory , we derive the explicit formulas for the direction , stability and period of the periodic solutions bifurcating from the steady state at certain critical values of the delay .
the idea that gravitational field equations could be interpreted using ( or derived from ) thermodynamic arguments has been explored by many people from widely different perspectives .( see e.g., ) .there is a tendency in the literature to club together these very different attempts as essentially the same or , at least , as very similar .such a point of view is technically incorrect , and given this tendency , it is useful to clarify the differences between the various approaches , as regards their assumptions , physical motivation and the generality of the results .i will begin with a series of comments aimed at this task : * to begin with , one must sharply distinguish between ( i ) the attempts concerned with the _ derivation _ of the field equations by thermodynamic arguments ( like e.g. , ) and ( ii ) the attempts related to the _ interpretation _ of the field equations in a thermodynamic language ( like e.g. , ) .the latter is as important as the former because the existence of a purely thermodynamic interpretation for the field equations is vital for the overall consistency of the programme .it is rather self - defeating to derive the field equations from thermodynamic arguments and then interpret them in the usual geometrical language !if gravity is thermodynamic in nature , then the gravitational field equations must be expressible in a _thermodynamic _ language .this crucial feature has not been given due recognition in the literature .unless the final result has an interpretation in thermodynamic language , such a derivation of the field equations is conceptually rather incongruous . + as an example of what i mean by such an interpretation , let me recall the following result .it can be shown that the evolution of geometry can be interpreted in thermodynamic terms , as the heating and cooling of null surfaces , through the equation : where are the degrees of freedom in the surface and bulk of a 3-dimensional region and is the average davies - unruh temperature of the boundary .the is the induced metric on the constant surface , , and is the proper - time evolution vector corresponding to observers moving with four - velocity .the factor ensures the correct result for either sign of the komar energy .the time evolution of the metric in a region ( described by the left hand side of ) , can be interpreted as the heating / cooling of the spacetime and arises because . in any _ static _ spacetime , on the other hand , , leading to `` holographic equipartition '' : .this result translates gravitational dynamics into the thermal evolution of the spacetime .the validity of for all observers ( i.e. , foliations ) ensures the validity of einstein s equations .+ in fact , _ no _ thermodynamic derivation of the field equations in the literature actually obtains the _ tensorial _ form of field equation . what isalways done _ is to obtain an equation of the form ( where is either a timelike or null vector ) and postulate its validity for all .so it is important to understand the physical meaning of such an equation , especially the left hand side , for a given class of .this will be a recurrent theme which i will elaborate on later sections . *many thermodynamic derivations of field equations available in the literature , work with the assumption that the entropy of a horizon is proportional to its area ( e.g. , ) and attempt to introduce thermodynamic arguments centered around it ._ it is likely that such derivations miss some essential physics . 
_ the connection between gravity and thermodynamics , motivated historically from the laws of black hole mechanics and the membrane paradigm , transcends einstein s theory . in a more general class of theories ,the ( wald ) entropy of the horizon is _ not _ proportional to its area .one should therefore distinguish approaches in this subject which are specially tuned to einstein gravity ( and uses the entropy - area proportionality ) from a broader class of approaches ( like e.g. ) because the latter ones , being more general , probably capture the underlying physics better .the above criticism is also valid for approaches based on entanglement entropy when it is assumed to be proportional to the horizon area . *another feature which distinguishes different approaches in the literature is whether the field equations are derived from a variational principle or from some other procedure .i am personally in favour of approaches which use a variational principle because they could offer a better window into microscopic physics .what is more , the approaches which does _ not _ use a variational principle are very limited in their scope .for example , it is virtually impossible to generalize such models beyond einstein s theory .( in contrast , the very first approach which used a thermodynamic variational principle to derive the field equations , obtained the field equations for all models at one go . ) some of these approaches like , for example , those which use the raychaudhuri equation also have non - trivial technical issues .+ even amongst the approaches which use variational principles , we need to distinguish between ( i ) those which vary the geometry ( viz ., the metric in some form , sometimes in a rather disguised manner ) and ( ii ) those which vary some auxiliary vector field , keeping the metric fixed .many approaches involving holographic concepts and entanglement entropy do vary the geometry in some form ; however , i prefer approaches which vary an auxiliary vector .( after all , if you are going to vary the metric / geometry in an extremum principle , why not just use the einstein - hilbert action and be done with it ? ! ) an example of an extremum principle which does _ not _ vary the metric , is given by the functional \right ) \label{qtotyy}\ ] ] here , and are the shear and expansion of a null congruence , are the shear and bulk viscous coefficients of a null fluid and the integrand can be interpreted as the rate of generation of heat ( ` dissipation without dissipation ' ; see ) due to matter and gravity on a null surface . varying with respect to and demanding that the extremum should hold for all ( i.e. , for all null surfaces ) will lead to einstein s equations .( we will say more about this in sec .[ sec : dwd ] . ) such a variational principle and others of a similar genre which we will discuss later treat the geometry as fixed and does not vary the metric .* at a more fundamental level , the horizon entropy can not be finite unless some kind of discreteness exists in the spacetime near planck scales .this is clear in the case of entanglement entropy , which is a manifestly divergent quantity ( see e.g. 
, ) and needs to be regularized by some ad - hoc cut - off ; but it is implicit in all approaches .so , unless we have a model which captures _ at least some of _ the quantum gravitational effects on the spacetime , any derivation of the field equations using a finite value for entropy is , at best , incomplete .* finally , let me emphasize that _ gravity can not be an entropic force_. this was ably demonstrated by matt visser by an argument which uses ( essentially ) elementary vector analysis .it is trivial to prove , in the newtonian limit , that a conservative force can not , in general , be expressed in the entropic form if is the davies - unruh temperature that depends on the magnitude of the acceleration .the relation implies that the level surfaces of coincide with those of , allowing us to introduce a function .this , in turn , implies that and hence the level surface of coincide with the level surfaces of .but since depends only on , this requires the level surfaces of to coincide with those of .this condition is , in general , impossible to satisfy and can happen only in situations of high symmetry ( for example , spherical , cylindrical , planar etc . ). it would be preferable if the phrase `` entropic gravity '' is _ not used as a rather generic term _ to describe the different approaches in this subject , for the simple reason that gravity can not be an entropic force . to summarize , there exist many different attempts in the literature to link gravity and thermodynamics .all of these are _ not _ equivalent either conceptually or technically and it is also likely that at least some of them are fundamentally flawed or incomplete .the approach i have been pursuing which i will describe here is marked by the following features : ( 1 ) much of it works for a wide class of theories , more general than einstein s gravity .in particular , the results hold for theories in which entropy is _ not _ proportional to horizon area .( 2 ) the field equations are derived from a thermodynamic extremum principle in which the geometry is not varied but some other auxiliary vector field is varied .( 3 ) the resulting field equations are interpreted in a thermodynamic language and not in a geometric language .( 4 ) the introduction of a zero - point length to the spacetime by quantum gravitational effects allows us to provide a microscopic basis for the variational principle which is used . here, i will concentrate on developing this perspective _ from first principles _ in a streamlined manner .obviously this will require us to make some educated guesses but i shall argue that these guesses are well - motivated and the results are quite rewarding . in particular , i will describe the following two aspects : * i will demonstrate a deep connection between two aspects of gravity which are usually considered in the literature to be quite distinct .the first is the fact that gravity seems to be immune to the shift in the zero level of the energy , i.e , to the shift in the value of cosmological constant .second is the feature i mentioned above , viz . , gravitational dynamics can be reinterpreted in a purely thermodynamic language .i will show how the first feature _ leads to _ the second and , in fact , provides a simple and natural motivation to consider the heat density of the null surfaces as a key physical entity .* much of the previous work treated the spacetime as analogous to a fluid and investigated its properties in the _thermodynamic limit_. 
the next , deeper , level of description of a fluid will be the _ kinetic theory _ which recognizes the discreteness and quantifies it in terms of a distribution function for its molecules .i will describe an attempt to do the same for the spacetime by introducing a distribution function for the atoms of spacetime ( which will count the microscopic degrees of freedom ) and relating it to the extremum principle which , in turn , will lead to the field equations .the principle of equivalence , along with principle of general covariance , strongly suggest that gravity is the manifestation of a curved spacetime and will set so that .occasionally , i will also set when no confusion is likely to arise .] , described by a non - trivial metric .the _ kinematics _ of gravity , viz .how a given gravitational field affects matter , can then be determined by postulating the validity of special relativistic dynamics in all freely falling frames .this will lead to the condition for the energy momentum tensor of matter , which encodes the influence of gravity on matter .unfortunately , we do not have any equally elegant guiding principle to determine the _ dynamics _ of gravity , viz . how matter determines the evolution of the spacetime metric .the dynamics is contained in the gravitational field equation , which in einstein s theory is assumed to be given by .( in a more general class of theories , like e.g , models , the left hand side will be replaced by a more complicated second rank , symmetric , divergence - free tensor . )one can obtain this equation , as einstein did , by ( i ) assuming that the right hand side _ must _ be and ( ii ) by constructing a second rank , symmetric , divergence - free tensor from the metric containing upto second derivatives .alternatively , as hilbert did , one can write down a suitable scalar lagrangian and vary it with respect to the metric and obtain the field equations . 
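for later contrast it is useful to display this textbook route explicitly ; with the signature ( - , + , + , + ) and units c = 1 , hilbert s procedure amounts to the single variational statement

\delta\left[\frac{1}{16\pi G}\int d^{4}x\,\sqrt{-g}\,R + S_{\rm matter}\right]=0
\quad\Longrightarrow\quad
R_{ab}-\frac{1}{2}R\,g_{ab}=8\pi G\,T_{ab},
\qquad
T_{ab}\equiv-\frac{2}{\sqrt{-g}}\,\frac{\delta S_{\rm matter}}{\delta g^{ab}},

with the metric treated as the dynamical variable to be varied .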
in either procedure, one tacitly assumes that the spacetime metric is a dynamical variable with a status similar to , say , that of the gauge potential in electromagnetism .this belief is based on the fact that einstein s equation is a second order differential equation for the metric just as maxwell s equation is a second order differential equation for .it is in the same spirit that we justify varying the metric in the hilbert action ( as analogous to varying in the electromagnetic action ) to obtain einstein s equation .further , once we have a classical action ] ( or in some other equivalent manner ) with the metric playing the role of a quantum variable .but , given the fact that spacetime geometry is conceptually very different from an external field propagating in it , this assumption viz ., that the metric is a dynamical variable similar to other fields is indeed nontrivial .further , if varying the metric in the hilbert action is _ not _ the appropriate way to obtain the classical theory , then one is forced to think afresh about all the quantum gravity programmes .interestingly enough , this textbook procedure of treating the metric as a dynamical variable , accepted without a second thought is by no means a unique way to obtain einstein s equation .in fact , it is probably _ not _ the most natural or efficient procedure .one can come up with alternative approaches _ and physically motivated extremum principles , _ leading to einstein s equation , in which the metric is _ not _ a dynamical variable .let me describe one such approach .the field equations we seek should be a relativistic generalization of newton s law of gravity . a natural way of generalizing this law is to begin by noticing that : ( i ) the energy density in the right hand side is foliation / observer dependent where is the four velocity of an observer .there is no way we can keep out of it .( ii ) we know from the principle of equivalence that plays the role of .so a covariant , scalar generalization of the left hand side , , could come from the curvature tensor which contains the second derivatives of the metric .any such generalization _ must depend on the four - velocity of the observer _ since the right hand side does .( iii ) it is perfectly acceptable for the left hand side _ not _ to have second _ time _ derivatives of the metric , in the rest frame of the observer , since they do not occur in . to obtain a scalar analogous to , having _ spatial _ second derivatives , we first project the indices of to the space orthogonal to , using the projection tensor , thereby obtaining the tensor .the only scalar we can get from is where can be thought of as the radius of curvature of the space . and should _ not _ to be confused with the curvature tensor and the curvature scalar of the 3-space orthogonal to . ] the natural generalization of newton s law then given by .working out the left hand side ( see e.g. , p. 259 of ) and fixing the proportionality constant from the newtonian limit , one finds that if this scalar equation should hold for all observers ( general covariance ) then we need which is the standard result . demanding that holds for each observer , captures the geometric statement viz . 
that energy density curves space as viewed by any observer in a nice manner and is indeed the most natural generalization of newton s law : .so , in this approach , the fundamental equation determining the geometry is which should hold for all normalized , timelike vectors at each event of spacetime rather than the standard equation . while the two formulations are algebraically equivalent , they are conceptually rather different . in the conventional approach to derive , we do not invoke any special class of observersinstead , we _ assume _ that the right hand side of the field equation _ must _ be and look for a generally covariant , divergence - free , second - rank tensor built from geometry to put on the left hand side .( alternatively , we look for a scalar lagrangian made from geometrical variables ) .but the source in newtonian gravity is actually which _ does _ involve an extra four - velocity for its definition . if we introduce observers with four - velocity and in the end demand that the equation should hold for all we obtain the same gravitational field equations by a different route .this approach to _ dynamics _ brings it closer to the way we handled the _kinematics _ by introducing the freely falling observers .the real importance of this approach stems from the fact that it allows us to construct a different kind of extremum principles which will lead to the gravitational field equations , _ without treating the metric as a dynamical variable ! _since this approach introduces an extra vector field into the fray , one can consider an extremum principle in which we vary which makes physical sense in terms of changing the observer instead of the metric .it is now possible , for example , to obtain the field equations by varying in a variational principle with the lagrangian and demanding that the extremum must hold for all .in fact , we can also use the lagrangian .varying in the resulting action , after imposing the constraint and demanding that the extremum should hold for all , will lead to the equation where is the lagrange multiplier . using the bianchi identity and , we will recover the field equations except for an undetermined cosmological constant . removing a total divergence from , we see that this is equivalent to a variational principle based on the functional term ) but , of course , here we are varying the vector field and _ not _ the metric .in fact , one can add any functional of the metric to the lagrangian and it would make no difference since the metric is not varied . ] = \int d^4 x \ , \sqrt{-g}\ , \left[(\nabla_iu^i)^2 - \nabla_ju^i\nabla_iu^j - 8\pi \rho\right ] \label{tracekaction}\ ] ] varying in ] is not obvious .our next task is to clarify that . in the case of an ideal fluid , with ,the combination is actually the _ heat density _ where is the temperature and is the entropy density of the fluid .( the last equality follows from gibbs - duhem relation and we have chosen the null vector with for simplicity . 
)the invariance of under ( constant ) reflects the fact that the cosmological constant , with the equation of state , has zero heat density .our guiding principle , as well as , suggests that _ it is the heat density rather than the energy density which is the source of gravity .but is the energy density for _ any _ kind of , not just for that of an ideal fluid .how do we interpret in a general context when could describe any kind of source not necessarily a fluid for which concepts like temperature and entropy do not exist intrinsically ?_ remarkably enough , this can be done_. in any spacetime , around any event , there exists a class of observers ( called a local rindler observers ) who will interpret as the heat density contributed by the matter to a null surface which they perceive as a horizon .this _ motivates _ us to introduce the concept of local rindler frame ( lrf ) and local rindler observers which will allow us to provide a thermodynamic interpretation of for any .this arises as follows : .light rays travelling at 45 degrees in the local inertial frame define the light cones at .( b ) right : a local rindler observer who is accelerating with respect to the inertial observer . for a sufficiently large acceleration, the trajectory of such an observer will be close to the light cones emanating from .the local rindler observer will perceive the light cone as a local rindler horizon and attribute to it a temperature given by .in other words , the vacuum fluctuations of the local inertial frame will appear as thermal fluctuations in the local rindler frame . ] in a region around any event , we first introduce the freely falling frame ( fff ) with coordinates .next , we boost from the fff to a local rindler frame ( lrf ) with coordinates constructed using some acceleration , through the transformations : when and similarly for other wedges .one of the null surfaces passing though , will get mapped to the surface in the fff and will act as a patch of horizon to the constant rindler observers .this construction leads to a beautiful result in quantum field theory .the local vacuum state , defined by the freely - falling observers around an event , will appear as a thermal state to the local rindler observers with the temperature : where is the acceleration of the local rindler observer , which can be related to other geometrical variables of the spacetime in different contexts [ see fig .[ fig : daviesunruh ] ] .the existence of the davies unruh temperature tells us that around _any _ event , in _ any _ spacetime , there exists a class of observers who will perceive the spacetime as hot .let us now consider the flow of energy associated with the matter that crosses the null surface .nothing unusual happens when this is viewed in the fff by the locally inertial observer .but the local rindler observer attributes a temperature to the horizon and views it as a hot surface .such an observer will interpret the energy , dumped on the horizon , by the matter that crosses the null surface , as energy added to a _ hot _ surface , thereby contributing a _ heat _content .( recall that , as seen by the outside observer , matter actually takes an infinite amount of time to cross a _ black hole _ horizon , thereby allowing for thermalization to take place .similarly , a local rindler observer will find that the matter takes a very long time to cross the horizon . 
) to compute in terms of , note that the lrf provides us with an approximate killing vector field , generating the lorentz boosts , which coincides with a suitably defined null normal at the null surface .the heat current arises from the energy current of matter and hence the total heat energy dumped on the null surface will be : where we have used the result that on the null surface .so we find that constant and treating the light cone as the degenerate limit of the hyperboloids .we set and take the corresponding limit .the motivation for this choice will become clearer later on . ]\frac{dq_{m}}{\sqrt{\gamma}d^{2}xd\lambda}=t_{ab } \ell^a\ell^b \label{hmatter}\ ] ] can indeed be interpreted as the heat density ( energy per unit area per unit affine time ) of the null surface , contributed by matter crossing a local rindler horizon , as interpreted by the local rindler observer .this interpretation works in the lrf irrespective of the nature of .so , the need to work with , forced on us by our guiding principle , _ leads to _ the introduction of local rindler observers in order to interpret this quantity as the heat density .there is an alternative interpretation of which will prove to be useful .since the parameter ( defined through ) is similar to a time coordinate , we can also think of ] on all the null surfaces ..i do _ not _ introduce the notion of entropy for the rindler horizon ( as proportional to its area ) or work with its variation . ]let us get back to the task of constructing an extremum principle from which we can obtain the field equations .we have argued that the matter sector appears in the extremum principle through the combination which has the interpretation of the heat density ( or the heating rate ) contributed to a null surface by the matter crossing it .we also saw that we can not vary the metric in the extremum principle .but in any variational principle constructed from , we now have the option of varying and leaving the metric alone .such a variational principle can take the form : +\mathcal{h}_g[x^i , \ell_a]\right ) ; \qquad \mathcal{h}_m[x^i , \ell_a ] \equiv t_{ab}(x ) \ell^a \ell^b \label{qtot1}\ ] ] where we will interpret as the contribution to the heat density from the microscopic degrees of freedom of geometry ( ` atoms of space ' ; i will use these two phrases interchangeably ) .this should depend on both and for the variational principle to be well defined .the success of this approach depends on our coming up with a candidate for ] through a functional fourier transform with respect to a lagrange multiplier field ] .integrating over , treating it as a rapidly varying fluctuation ( with ] such that is the same as .i will describe this in detail elsewhere . ] on shell , when the equations of motion hold , the two terms in the curly brackets in cancel each other and the net heat density has the planckian value , which , of course , has no gravitational effect .but it tells us that there _ is _ a zero - point contribution to the degrees of freedom in spacetime , which , in dimensionless form , is just unity .therefore , it makes sense to ascribe degrees of freedom to an area , which is consistent with what we know from earlier results in this subject .so a two - sphere radius has , which was the crucial input that was used in a previous work to determine the numerical value of the for our universe .( this is similar to assigning molecules to a phase volume . 
in kinetic theory, we do not worry about the fact that is not always an integer .in the same spirit , we are not concerned by the fact that is not an integer . ) .thus , the microscopic description does allow us to determine the value of the , ( which arose as an integration constant ) , as it should in any complete description .let me elaborate a little bit on this aspect , since it can provide a solution to what is usually considered the most challenging problem of theoretical physics today .observations suggest that our universe has three distinct phases of expansion : ( i ) an inflationary phase with an approximately constant density , fairly early on .( ii ) a phase dominated , first by radiation and then by matter , with ] completely specify the dynamics of our universe and act as its signature . of these , and can , in principle , be determined by standard high energy physics .but we need a _ new _ principle to fix the value of , which is related to the integration constant that appears in the field equations in our approach .it turns out that such a universe , with these three phases , harbors a new _ conserved _ quantity , which is the number of length scales ( or radial geodesics ) , that cross the hubble radius during any of these phases .any physical principle which can determine the value of during the radiation - matter dominated phase , say , will fix the value of in terms of $ ] . taking the modes into the very early phase, we can fix the value of this conserved quantity at the planck scale , as the degrees of freedom in a two - sphere of radius .in other words , we take .this , in turn , leads to a remarkable prediction relating the three densities : from cosmological observations , we know that ; if we assume that the range of the inflationary energy scale is gev , we get , which is consistent with the observations !this approach for solving the cosmological constant problem provides a unified view of cosmic evolution , connecting the three phases through in contrast with standard cosmology in which the three phases are joined together in an unrelated , _ ad hoc _ manner .moreover , this approach to the cosmological constant problem _ makes a falsifiable prediction _, unlike any other approach i am aware of . from the observed values of and , we can _ predict _ the energy scale of inflation within a very narrow band to within a factor of about five if we include the ambiguities associated with reheating .if future observations show that inflation occurred at energy scales outside the band of gev , our model for explaining the value of is ruled out .let me reiterate the logical sequence described in this work which leads to a completely different perspective on gravity and the derivation of its field equations .* we postulate that the gravitational field equations should arise from an extremum principle which remains invariant under the transformation (constant ) .* this leads to two conclusions : ( a ) the metric tensor can not be a dynamical variable which is varied in the extremum principle .( b ) the should appear in the extremum principle though the combination where is a null vector .* we next look for a physical interpretation of for an arbitrary and find that , in any spacetime , the local rindler observers will interpret as the heat density contributed to a null surface by the matter crossing it .this interpretation works for _ any _ and provides a strong _ motivation _ for introducing local rindler observers in the spacetime . 
* since ( i ) the metric can not be a dynamical variable and ( ii ) we now have the auxiliary null vector field arising through , we look for an extremum principle in which is varied .the extremum should hold for all null vectors at any event and constrain the background metric .* we take the integrand of the extremum principle to be where is interpreted as the heat density due to the microscopic spacetime degrees of freedom .we have introduced a length scale from dimensional considerations and is the dimensionless count of the microscopic degrees of freedom . *a discrete count for the microscopic degrees of freedom implies a discrete nature for spacetime at planck scales .we incorporate this fact by an effective , renormalized / dressed metric which ensures that the geodesic distance of the effective metric has a zero - point length and is given by where is the geodesic distance of the classical spacetime .we take .* the number of spacetime degrees of freedom , at an event , is taken to be proportional to the area of an equi - geodesic surface centered at that event in the limit of vanishing geodesic distance ( see ) . with this choice ,one obtains an extremum principle based on . varying this with respect to all null vectors and demanding the equation to hold for all at an event leads to einstein s equation with an undetermined cosmological constant . *when equations of motion hold , we can assign degrees of freedom with every area element in spacetime .this , in turn , allows us to fix the value of the undetermined correctly and _ provides a solution to the cosmological constant problem_.the approach outlined here is based on the idea that gravity is the thermodynamic limit of the statistical mechanics of certain microscopic degrees of freedom ( ` atoms of space ' ) . in the thermodynamic limit, deriving the field equations of classical gravity is algebraically straightforward one might even say trivial , but let us not shun simplicity !it is obtained from an extremum principle based on the functional : varying in with the constraint that , and demanding that the result holds for all will lead to einstein s equation with an arbitrary cosmological constant .the approach works for a wild class of gravitational theories including the models . as far as classical gravity goes , that is the end of the story .but we could enquire about the physical meaning of this extremum principle .the combination is invariant under the shift ( constant ) which was the original reason to put it in the variational principle .postulating that the extremum principle must be invariant under the transformation (constant ) naturally _ leads to _ the introduction of in the variational principle and to the local rindler observers for its interpretation . 
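for completeness , here is the short argument behind the statement that this extremum principle determines the field equations only up to a cosmological constant ( written for the einstein case , in units with G = c = 1 ) . the extremum condition for all null vectors reads

\left(R_{ab}-8\pi T_{ab}\right)\ell^{a}\ell^{b}=0
\qquad \text{for all } \ell^{a} \text{ with } \ell^{a}\ell_{a}=0 ,

and a symmetric tensor whose double contraction with every null vector vanishes must be proportional to the metric , so that R_{ab}-8\pi T_{ab}=f(x)\,g_{ab} for some scalar function f . taking the divergence , the contracted bianchi identity \nabla^{a}(R_{ab}-\tfrac{1}{2}R\,g_{ab})=0 together with \nabla^{a}T_{ab}=0 gives \partial_{b}(f-\tfrac{1}{2}R)=0 , i.e. f=\tfrac{1}{2}R-\Lambda with \Lambda a constant , and hence

G_{ab}+\Lambda\,g_{ab}=8\pi T_{ab} .

the cosmological constant thus enters only as an integration constant ; this is the precise sense in which the insensitivity of the extremum principle to a constant shift of the matter sector leaves it undetermined by the field equations .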
as i mentioned earlier ,this by itself is a valuable insight and shows the connection between two features viz .the immunity of gravity to shifts in the cosmological constant , and the thermodynamic interpretation of gravity which were considered as quite distinct .further , _ does _ have the interpretation as the heat density of matter on any null surface .it is also possible to interpret as the heating rate ( `` dissipation without dissipation '' ; see sec .[ sec : dwd ] ) of the null surface , thereby providing a purely thermodynamic underpinning for classical gravity .none of this poses any conceptual or technical problem .the real issues arise when we try to go beyond the classical theory and provide a semi - classical or quantum gravitational interpretation of this result .the description in terms of the expression in is approximate and only valid when .the entire description , based on the q - metric acting as a proxy for the effective spacetime metric , can not be trusted too close to planck scales .we expect it to capture the quantum gravitational effects to a certain extent , but at present we have no way of quantifying the accuracy of this approach .( it should also be noted that the identification might have further corrections close to planck scales . )we also do not know how to deal with matter fields in a quantum spacetime and it is not clear how to introduce in a systematic way . in fact , the _ only _ reason to vary the metric tensor in an extremum principle a procedure which i have argued against is to obtain classically and semi - classically .none of the thermodynamic derivations , which leads to the exact ( rather than linearized ) field equations , obtains ( or ) from fundamental considerations based on a matter action .it is ironic that the problem arises from the matter sector rather than from the gravity sector !the ideas outlined in this work suggest a more radical solution . matter , as we understand it , is quantum mechanical ; which essentially means that it is made of _ discrete _ degrees of freedom .how does such a _ discrete _ structure end up curving the _ continuum _ geometry ? that is , what is the actual _ mechanism _ by which produces ?the approach developed here suggests that one needs to introduce certain `` hidden variables '' , viz ., the auxiliary vector field , which encodes the discrete nature of geometry , and couple it to ( which is also fundamentally discrete in nature ) .the continuum geometry has to be related to , at scales much larger than the planck scale , by a suitable approximation just as the continuum density or pressure of a fluid arises when we average over the discreteness of the molecules .i have outlined one possible way in which this idea could be implemented , but it is by no means unique .further exploration of this approach could lead to a better understanding of how matter _ really _ ends up curving the spacetime .my research work is partially supported by j.c.bose fellowship of department of science and technology , india .i thank sumanta chakraborty , sunu engineer , dawood kothawala and kinjalk lochan for discussions and comments on the first draft .parattu k. , b. r. majhi , and t. padmanabhan , 2013 `` the structure of the gravitational action and its relation with horizon thermodynamics and emergent gravity paradigm '' _ phys .* 87 * , 124011 [ arxiv:1303.1535 ] .chakraborty s. , t. 
padmanabhan , `` evolution of spacetime arises due to the departure from holographic equipartition in all lanczos - lovelock theories of gravity '' , 2014 _ phys ., _ * d 90 * , 124017 [ arxiv:1408.4679 ]
i clarify the differences between various approaches in the literature which attempt to link gravity and thermodynamics . i then describe a new perspective based on the following features : ( 1 ) as in the case of any other matter field , the gravitational field equations should also remain unchanged if a constant is added to the lagrangian ; in other words , the field equations of gravity should remain invariant under the transformation (constant ) . ( 2 ) each event of spacetime has a certain number ( ) of microscopic degrees of freedom ( ` atoms of spacetime ' ) . this quantity is proportional to the area measure of an equi - geodesic surface , centered at that event , when the geodesic distance tends to zero . the spacetime should have a zero - point length in order for to remain finite . ( 3 ) the dynamics is determined by extremizing the heat density at all events of the spacetime . the heat density is the sum of a part contributed by matter and a part contributed by the atoms of spacetime , with the latter being . the implications of this approach are discussed .
looking at a hamiltonian formulation of relativistic dynamics , dirac was led to consider various forms , depending on the symmetry properties of the hypersurface that is chosen in this order .accordingly , the generators of the poincar algebra drop into dynamical or kinematic ones . among the different forms ,the point - form approach , which is based on a hyperboloid surface , , is probably the most aesthetic one in the sense that the space - time displacement operators , , are the only ones to contain the interaction while the boost and rotation operators , altogether , have a kinematic character .this approach is also the less known one , perhaps because dealing with a hyperboloid surface is not so easy as working with the hyperplanes that underly the other forms ( instant and front ) .it nevertheless received some attention recently within the framework of relativistic quantum mechanics ( rqm ) . due to the kinematic character of boosts, its application to the calculation of form factors can be easily performed _ a priori _ and , moreover , these quantities generally evidence the important property of being lorentz invariant . a `` point - form '' ( `` p.f . '' )approach has been successfully used for the calculation of the nucleon form factors .it however fails in reproducing the form factor of much simpler systems , including the pion .the asymptotic behavior is missed and the drop - off at small is too fast in the case of a strongly - bound system . analyzing the results , it was found that this `` point - form '' , where the dynamical or kinematic character of the poincar generators is the same as for dirac s one , implies physics described on hyperplanes .this approach is nothing but that one presented by bakamjian as being an `` instant form which displays the symmetry properties inherently present in the point form '' .sokolov mentioned it was involving `` hyperplanes orthogonal to the 4-velocity of the system '' under consideration , adding it was not identical to the point form proposed by dirac .developing an approach more in the spirit of the original one therefore remains to be made . in this contribution , we present an exploratory work that is motivated by dirac s point form and , consequently , implies physics described on hyperboloid - type surfaces . due to the lack of space ,we only consider here the main points while details can be found in ref . . how this new approach does for hadron form factors is briefly mentioned .each rqm approach is characterized by the relation that the momenta of a system and its constituents fulfill off - energy shell .this one is determined by the symmetry properties of the hypersurface which the physics is formulated on . in absence of particular direction on a hyperboloid - type surface, it necessarily takes the form of a lorentz scalar .one should also recover the momentum conservation in the non - relativistic limit , .thus , for the two - body system we are considering here , the expected relation could read : such a constraint is obtained from integrating plane waves on the hypersurface , : where and are replaced by and , and satisfies . to understand the ingredients entering the l.h.s . 
of the above equation ,the `` time '' evolution should be examined .this goes beyond considering the upper part of a hyperboloid surface often mentioned in the literature .interestingly , eq .( [ scalar ] ) can be cast into the following form : which is very similar to a front - form one , but the unit vector , , has no fixed direction .the next step consists in considering a wave equation , which can be obtained from taking the square of the momentum operator , : where .one should determine under which conditions it admits solutions verifying eq .( [ scalar ] ) and , at the same time , leads to a relevant mass operator . with this aim ,we assume , from which we get : this relation shows that the orientation of is conserved , which greatly facilitates the search for a solution . while doing so , a lorentz - type transformation adapted from the bakamjian - thomas one has to be made .the constituent momenta are expressed in terms of the total momentum , , and the internal variable , , while verifying eq .( [ scalar ] ) .moreover , the interaction is assumed to fulfill constraints but these ones , which amount to take into account higher - order meson - exchange contributions , are actually well known as part of the general construction of the poincar algebra in rqm approaches .the present point form implies that the system described in this way evidences a new degree of freedom . in the c.m ., a zero total momentum is obtained by adding the individual contributions of constituents and an interaction one , consistently with the fact that contains the interaction .the configuration so obtained points isotropically to all directions as sketched in fig . 2 of ref .this new degree of freedom appears explicitly in the definition of the norm , beside the integration on the internal variable : where represents a solution of a mass operator .another aspect of the present point form concerns the velocity operator , , entering the construction of the poincar algebra , and the corresponding .their expressions , which differ from earlier ones , read : despite unusual features , the present point form can be consistently developed .it evidences similarities with the instant and front forms in that the hypersurface it is formulated on is independent of the system under consideration , which is at the origin of the above constraints .these ones are absent in the earlier `` point form '' where the kinematic character of boosts is trivial , the operation affecting both the system and the hyperplane used for its description , their respective velocity and orientation being related .for a part , the present work was motivated by the drawbacks that an earlier point form evidences for the form factors of strongly - bound systems calculated in the single - particle current approximation .some results are presented in fig .[ fig1 ] for the pion charge form factor . at high , the new point form ( d.p.f . ) shows a behavior , like the instant- and front - form results , while the earlier point form ( `` p.f . '' ) is providing a one .the change in the power law is largely due to the form of the velocity operator , eq .( [ velocity ] ) , which , containing some dependence on , makes less difficult to match the initial- and final - state momenta with those of the struck constituent . 
at low , the new point form does better than the earlier one but the improvement is not impressive .more important however , the bad behavior is shared by results obtained in the instant and front forms with parallel kinematics ( i.f.+f.f.(parallel ) ) .all of them evidence a charge squared radius scaling like the inverse of the squared mass of the system . in comparison , the standard instant and front forms ( i.f .( breit frame ) and f.f .( perp . ) in fig .[ fig1 ] ) do well .lorentz invariance of form factors is often considered as an important criterion for validating an approach . with this respect ,the point form is to be favored as it fulfills this property .it however recently appeared that the approaches that give bad results above are strongly violating another important symmetry : poincar space - time translation invariance .contrary to lorentz invariance , this symmetry can not be checked by looking at form factors in a different frame .instead , one could check relations such as : \rangle = -i\langle \partial^{\mu}\,j^{\nu}(x ) \rangle.\ ] ] this relation can not be verified exactly at the operator level in rqm approaches with a single - particle current but one can require it is verified , at least , at the matrix - element level . with this respect , what is an advantage for the point - form approach becomes a disadvantage as there is no frame where one can minimize the effect of a violation of the above relation ( a factor 2 - 3 for the nucleon and roughly 6 for the pion ) . on the contrary , in the instant and front forms, one can consider different frames .it turns out that the instant- and front - form results for a perpendicular kinematics ( standard ones ) verify the above equality while those for a parallel kinematics do badly , similarly to the point - form case .it thus appears that poincar space - time translation invariance could be more important than the lorentz one and that the intrinsic lorentz covariance of the point - form approach is not so much an advantage than what could be _ a priori _ expected .
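a schematic version of this check may be useful . taking matrix elements of the above relation between eigenstates | P_i \rangle and | P_f \rangle of the full ( interacting ) four - momentum operator , the left - hand side is fixed entirely by the eigenvalues ,

\langle P_f\,|\,[P^{\mu},J^{\nu}(x)]\,|\,P_i\rangle=(P_f-P_i)^{\mu}\,\langle P_f\,|\,J^{\nu}(x)\,|\,P_i\rangle ,

while the right - hand side , -i\,\langle P_f\,|\,\partial^{\mu}J^{\nu}(x)\,|\,P_i\rangle , probes the actual x dependence of the current that is used . for the exact current the two sides coincide identically ; for a single - particle current the x dependence is essentially governed by the free constituent kinematics , so the momentum transfer appearing on the right - hand side differs in general from (P_f-P_i)^{\mu} , and the size of this mismatch , which depends on the frame and the kinematics chosen , is one way to quantify the violation quoted above .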
noticing that the `` point - form '' approach referred to in many recent works implies physics described on hyperplanes , an approach inspired from dirac s one , which involves a hyperboloid surface , is presented . a few features pertinent to this new approach are emphasized . consequences as for the calculation of form factors are discussed .
the basic machinery for olfaction , our ability to smell , is an array of a few hundred different types of sensory neurons .each of these expresses molecular receptors , that belong to a single type .when this small neuronal assembly is exposed to external stimuli , its cooperative response is capable to detect and recognize a wide variety of _ odorants _ and to measure their concentrations .we use the terminology odorant " to describe any chemically homogenous substance ( ligand ) which elicits a response from the olfactory system .the response of the array of neurons to any particular odorant is determined by the responses of the individual constituent neurons .this response is , however , governed by the extent to which the receptors expressed by the particular neuron bind the odorant , i.e. by the _ affinity _ of the neuron s receptors to the odorant .according to a recently proposed model , these affinities can be viewed as independent random variables , drawn from a single receptor affinity distribution ( rad ) , denoted by .once a set of affinities ( for all odorants and all sensory neurons ) has been generated , the response of the entire sensory assembly to any odorant is determined .this information is transferred from the sensory neurons to the olfactory bulb , onto which the axons of the sensory neurons project .they form synapses on secondary neurons ( mitral and tufted cells ) .this integration of the sensory input , that takes place in the olfactory bulb , forms the first step of the information processing that takes place in the olfactory pathway .interneurons of two major types ( periglomerular and granule cells ) are believed to play a role in computing the pattern transmitted from the olfactory bulb to higher brain centers . in this paperwe evaluate , on the basis of a very simple model , some of the potential computational characteristics of the olfactory bulb , as it performs this initial integration .we hope some of our quantitative results could be biologically relevant . our simple model for the sensory array and a single processing unitis depicted in fig .[ fig1 ] .the model we introduce is , however , interesting also from a mathematical point of view .the problem of linear separability ( * ls * ) of points in dimensional space has received considerable attention since the 19th century . in the mathematics literaturecover studied the problem of * ls * of independent dichotomies using combinatorial methods . in computer science the perceptron , introduced by rosenblatt and analyzed in detail by minsky and papert , gave a major boost to the field of neural networks .more recently , by introducing statistical mechanics techniques gardner extended cover s results to cases where there are correlations between the points that have to be linearly separated .we generalize the problem of separating ( zero - dimensional ) _ points _ , to the separability of ( one - dimensional ) _ strings _ or _ curves _ , embedded in -dimensional space . in the context of our problem the curves that need be separated are parametrized continuously by the odorant concentration . 
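as a point of reference , cover s counting result for the separability of _ points _ is easy to evaluate numerically . the snippet below ( python ) computes the number of linearly separable dichotomies of p points in general position in n dimensions ( for hyperplanes through the origin ) , c(p , n) = 2 \sum_{k=0}^{n-1}\binom{p-1}{k} , and the fraction c(p , n)/2^{p} of all dichotomies , which drops from 1 to 0 around p = 2n as n grows ; the curve - separation problem treated here generalizes exactly this setting .

from math import comb

def cover_fraction(p: int, n: int) -> float:
    """fraction of the 2**p dichotomies of p points in general position in
    n dimensions that are realizable by a hyperplane through the origin."""
    return 2 * sum(comb(p - 1, k) for k in range(n)) / 2 ** p

for n in (5, 20, 100):
    # the transition sharpens around p = 2 n (capacity alpha = p / n = 2)
    print(n, [round(cover_fraction(int(a * n), n), 3) for a in (1.0, 1.5, 2.0, 2.5, 3.0)])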
in principle one can address the separability of curves by placing a discrete set of points on each curve , thereby mapping the problem onto the previously solved one , of separating points . one should note , however , that points that lie on the same curve are not independent ; in fact they are correlated in ways that render the previously developed analytical methods inapplicable . therefore we present an extensive numerical analysis of the capacity of this special neural network , of sensory neurons that provide input to a single processing unit . the capacity we calculate is interpreted as follows . the sensory system is exposed to odorants , _ one at a time_. one of these is the `` target '' ; the aim is to distinguish the target from all the other odorants that form a noisy olfactory `` background '' . the model , based on a single layer perceptron , is introduced and discussed in detail in section 2.1 . then we turn to describe the method we have developed in order to determine numerically the capacity . to do this we had to adapt and use several different techniques . one of these , a learning algorithm introduced by nabutovsky and domany , is described in sec 2.2 . this algorithm , like all other perceptron learning rules , finds the separation plane ( if the problem _ is _ * ls * ) ; however , unlike other learning algorithms , it provides a rigorous signal to the fact that a sample of examples is _ not _ * ls * . another technique we had to adapt to our purposes is finite size scaling ( fss ) analysis of the data . the main results are presented in sec 3 as curves of capacity as a function of odorant concentration in the thermodynamic ( ) limit , obtained by extrapolation , using fss , from data obtained at a sequence of values . this large limit is quite natural from both practical and theoretical points of view . in practice , for of the order of a few hundred , the results can hardly be distinguished numerically from those at the limit . as to the theoretical side , the situation in this limit is much cleaner and easier to analyze . the final section 4 contains a critical discussion of the results from a biological point of view . our central finding is summarized in fig [ fig : alphac ] ; if we fix the range of concentrations in which the system operates , and increase the number of background odorants , we will reach a critical number beyond which the system fails to discriminate the target . this critical number is proportional to the number of sensory neurons , i.e. , and it decreases when the concentration range increases .
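the finite size scaling step mentioned above can be sketched as follows ; the size sequence and the capacity values in the snippet are placeholders rather than results of the calculation , and the fit assumes the simple ansatz alpha_c(n) = alpha_c(infinity) + a n^{-b} , which is one convenient choice and not necessarily the specific form used in the analysis .

import numpy as np
from scipy.optimize import curve_fit

# placeholder input: system sizes and the capacity estimates measured at each
# size by the separability runs (to be replaced by the actual measurements)
n_vals = np.array([25.0, 50.0, 100.0, 200.0, 400.0])
alpha_n = np.array([0.879, 0.806, 0.763, 0.738, 0.722])

def fss(n, alpha_inf, a, b):
    # finite-size-scaling ansatz: alpha_c(n) approaches alpha_c(inf) as a power law
    return alpha_inf + a * n ** (-b)

(alpha_inf, a, b), _ = curve_fit(fss, n_vals, alpha_n, p0=(0.7, 1.0, 0.5))
print("extrapolated alpha_c =", round(float(alpha_inf), 3))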
the simple neural assembly that is considered here consists of a single secondary neuron, which receives inputs from an array of units that model the sensory neurons. the single secondary neuron represents a "grandmother cell", whose task is to detect one particular "target" odorant, labeled 0. the sensory scenario we consider allows exposure of the neuronal assembly to a single odorant, which may either be the target odorant or one of the background odorants. the odorant provides simultaneous stimuli to the sensory neurons. the aim of the single secondary neuron is to determine whether the odorant that generated the incoming signal from the sensory array is the target odorant 0 or not. we assume that all odorants, background and target, are presented to the sensory array in concentrations that lie within a prescribed range. we pose the following, well defined quantitative question: _what is the maximal number of different background odorants that our neuron can distinguish from the target, for any concentration within the prescribed range?_ to sharpen the question, we put it in a more precise mathematical form. consider background odorants with respective concentrations in the range ([eq:range]). each odorant is characterized by the affinities of the receptors to it. according to the rad model, these affinities are selected independently from a single distribution. all our numerical results were obtained using one particular form for this distribution, whose average and variance follow directly; the distributions suggested previously were poisson and binomial.
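the independence assumption at the heart of the rad model is easy to make concrete numerically. the short sketch below (python) draws a full affinity matrix; the exponential distribution is only an assumed stand-in, since the specific form used in the paper is not reproduced here — any distribution that vanishes for negative affinities and has finite first and second moments would serve the argument equally well.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_affinities(n_receptors, n_odorants, scale=1.0):
    # Each entry K[i, mu] is the affinity of receptor i for odorant mu,
    # drawn independently from a single receptor affinity distribution.
    # The exponential below is only a stand-in: it vanishes for negative
    # affinities and has finite first and second moments, which is all
    # the argument in the text requires.
    return rng.exponential(scale, size=(n_receptors, n_odorants))
```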
with regard to the computational limitations of our model , the important idea behindthe rad model lies not in the exact form of the distribution , but on the fact that the affinities can be thought of as independent random variables .the main computational features of our model will not be altered as long as the distribution has the following features : it is zero for negative affinities and it has finite first and second moments .we have used since it satisfies the previous constraints and is easier to deal with in analytical calculations . when receptor is exposed to odorant , at concentration ,its response is given by where is a sigmoid shaped function ; we use throughout this paper . the value taken by the affinity sets the particular concentration scaleat which odorant affects the sensory neuron . from this point onwe set in eq .( [ eq : psi ] ) ; this means that the concentrations are measured in inverse units of the parameter the set of values constitute a vector of signals * s* , generated by the entire sensory array , when it is exposed to odorant .the serve as inputs to our secondary neuron , which we model as a linear threshold element or perceptron ; its output signal is given by the simple neural network described above is schematically presented in fig .the sensory neurons are represented by boxes and the secondary neuron by a circle .we require the output of this neuron to differentiate the target odorant from the background , i.e. yield for * any * odorant concentration in the allowed range ( [ eq : range ] ) . to understand the geometrical meaning of this requirement , note that when the concentration of odorant is varied in the allowed range ( [ eq : range ] ) , the corresponding vector * s* traces a _ curve _ ( or _ string _ ) in the -dimensional space of sensory responses .the requirement ( [ eq : smu ] ) means that there exists a hyperplane , such that the entire curve that corresponds to the target odorant lies on one side of it , while the curves that correspond to _ all _ background odorants lie on the other side .this explains our statement , made in the introduction , that the problem we solve deals with the _ linear separability of curves_. we show that a solution to this classification problem can be found , provided .we estimate the critical capacity numerically .this is done by extrapolating results obtained for various values of , using finite size scaling techniques , to the limit .the value of is evaluated as a function of the limiting odorant concentrations . in order to obtain these results using existing methodology, the most natural and straightforward thing to do is to place a discrete set of points on each curve , corresponding to different concentrations , and to require that the points that lie on the curve of the target odorant are linearly separable from the points that represent the background .that is , equations ( [ eq : smu ] ) become this raises the technical question of how many ( discrete ) representatives of the same odorant should be included in the learning set .we show below , that while the critical number of odorants , scales linearly with , the number of representatives of a single odorant , , has to grow at least as fast as .this ensures that increasing further does not change the results of the calculation ( e.g. the value of ) and hence the discrete points indeed represent correctly the continuous curves on which they lie .our problem has been turned into one of learning patterns " , that constitute our training set . 
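continuing the sketch above, the following helpers turn the affinity matrix into the discretized training set described in this section: a saturating response per receptor, a signal vector per odorant and concentration, and m sampled concentrations per curve. the saturating function x/(1+x) is an assumption, not the paper's exact sigmoid, and the normalization and multiplication by the desired output anticipate the construction given in the next paragraph.

```python
def response(K_col, c):
    # Saturating response of every receptor to one odorant presented at
    # concentration c; x / (1 + x) is one simple saturating choice.
    x = c * K_col
    return x / (1.0 + x)

def training_set(K, c_min, c_max, m, target=0):
    # Discretize each odorant's curve by m concentrations drawn uniformly
    # from [c_min, c_max]; multiply each normalized sensory vector by the
    # desired output (+1 for the target odorant, -1 for the background).
    patterns = []
    for mu in range(K.shape[1]):
        zeta = 1.0 if mu == target else -1.0
        for c in rng.uniform(c_min, c_max, size=m):
            s = response(K[:, mu], c)
            patterns.append(zeta * s / np.linalg.norm(s))
    return np.array(patterns)   # shape: (m * n_odorants, n_receptors)
```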
for technical reasons it is convenient to introduce and work with normalized patterns , with running over the discrete concentrations and over all odorants .note that we also multiplied each pattern * s* by its desired output , ; after this change of representation the condition of linear separability ( [ eq : smuzeta ] ) becomes the question posed above , whether the target odorant can or can not be distinguished from the background , has been reduced to the following one : is there a set of weights , for which all inequalities ( [ eq : pos ] ) are satisfied ?this problem is of the type studied by rosenblatt , and is an example of classification by a single layer perceptron .a solution exists if one can find a weight vector ( that parametrizes the perceptron ) such that for all the patterns in the training set the field " i.e. the projection of the weight vector onto all patterns is positive .we wish to determine the size of the training set , i.e. the number of background odorants , for which a solution can be found .this is done by executing a search for a solution by means of a _ learning algorithm_. there are several learning algorithms ( e.g. rosenblatt , abbott and kepler ) in the literature ; all are guaranteed to find such a weight vector , in a finite number of steps , _ provided a solution exists_. if , however , the problem is _ not _ * ls * and a solution _ does not _ exist , most learning algorithms will just run ad infinitum .an exception to this is the algorithm of nabutovsky and domany ( nd ) which detects , in finite time , that a problem is non - learnable .this is a batch perceptron learning algorithm , presenting sequentially the entire training set in one sweep " and repeating the process until either a solution is found or non - learnability is established .we found that this algorithm is efficient and convenient to use ( see for other algorithms that detect non-*ls * problems ) .. nd introduced a parameter which they called _ despair _ , which is calculated on line " in the course of the learning process . is bounded if the training set is * ls*. since the nd algorithm can be shown to either find a solution , or transgress the bound for in a finite number of learning iterations , effectively signals if the learning set fails to be linearly separable .the theorem they proved can be easily extended to the distribution of examples in our problem .we introduced a halting criterion , which is probably more stringent than necessary , since no attempt has been made to determine an optimal lower bound .in figures [ fig : dsino]a and [ fig : dsino]b typical evolutions of the despair are shown for an * ls * case and for a non-*ls * case , respectively .the behavior of is strikingly different in the two cases , showing that indeed is a good indicator of learnability . in the learnable cases grows linearly with the number of learning sweeps until a solution is found ( and the curves terminate ) . in the non-*ls* cases grows exponentially with the number of sweeps and would continue to grow ; the process is halted when it s value exceeds a known bound , that must be satisfied if the problem is * ls*. we now describe the nd algorithm used in the simulations .the patterns of the learning set are presented one at a time ( one cycle constitutes a sweep ) .nd have shown that for binary valued patterns ( ) , i.e. patterns on vertices of a unit hypercube , an upper bound exists iff the training set is * ls*. 
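a schematic version of such a learnability test is sketched below. it only mirrors the control flow described here — sweep the training set, update on mistakes, and stop either at a solution or when a despair-like quantity crosses a bound; the actual nabutovsky-domany update rule, its modulation function and the rigorous bound are not reproduced, so the growth law used for the despair is a pure placeholder.

```python
def learn_or_reject(X, despair_bound=1e12, max_sweeps=10_000):
    # X: normalized patterns, one row per example, already multiplied by
    # the desired output, so linear separability means w @ x > 0 for all x.
    n = X.shape[1]
    w = X[0].copy()
    despair = 1.0
    for _ in range(max_sweeps):
        mistakes = 0
        for x in X:
            if np.dot(w, x) <= 0.0:        # misclassified example
                mistakes += 1
                w = w + x                  # plain perceptron step (stand-in)
                despair *= 1.0 + 1.0 / n   # placeholder growth law
        if mistakes == 0:
            return True, w                 # linearly separable
        if despair > despair_bound:
            return False, w                # declared non-LS
    return False, w
```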
on the other hand, the dynamics is shown to take the despair beyond that bound in a finite (linear in the number of patterns) number of iterations unless a solution exists and the algorithm halts. the procedure is as follows: initialize the weight vector and the despair, then go to the next example. if it is correctly classified, do nothing to the current weight vector and go to the next example. once a misclassified example is found, update the weight vector as well as the despair parameter according to the nd prescription; the step size is not just a learning rate parameter but an effective modulation function, chosen in order to maximize the increase of the despair. the learning dynamics halts if all patterns are correctly classified or, alternatively, if the value of the despair exceeds an upper bound, given by eq. ([upp]); this is guaranteed to happen in a finite number of steps. since there is a large number of parameters that are to be varied, we first present a detailed description of the manner in which we deal with every one of them. there are two random elements in our studies. the first is the selection of concentrations for each odorant, within the range ([eq:range]); the second is the choice of the affinities of the receptors to the odorants, selected at random from the distribution of eq. ([eq_psi]). for every choice of the remaining variables we generate an ensemble of experiments and average the object we are measuring over these two random elements. we select several sets of affinities and for each of these perform several random selections of concentrations. the object we wish to estimate numerically is the probability that the curves described in the introduction are *ls*. to this end we place a discrete set of points on each curve and measure the corresponding probability. as we will see, for large enough numbers of points this probability _becomes independent of m_; beyond this value the set of discrete points represents the corresponding curves faithfully, and hence the limiting value is our estimate of the probability of separability. finally, we are interested in this function in the large system limit, i.e. when the numbers of neurons and odorants grow while their ratio is fixed. this limit is obtained by extrapolating our finite-size results, using finite size scaling methods. our first task is to determine how the number of points per curve must scale with the system size; that is, how dense a set of concentrations must be used so that discrete points represent accurately the continuous curves of eq. ([eq:si])? we choose values for the number of receptor cells, the number of odorants and the limiting concentrations. we also set some value for the number of concentrations by which every odorant is represented (which will be varied). we proceeded according to the following steps:
1 . draw from the distribution a set of affinities for all receptors and odorants.
2 . generate for each odorant the concentration values, from a uniform distribution in the allowed range, and construct the set of normalized patterns.
3 . run the nd learning algorithm until it stops; register whether the set was *ls* or not.
steps 2,3 are repeated a number of times for each set of affinities; the whole process 1-3 is repeated for different sets of affinities. the ensemble sizes were chosen large enough that increasing them further made no difference, and with such values the results did not depend on the particular realizations. at this point we have an ensemble of experiments, out of which a fraction of cases were linearly separable. keeping the other parameters fixed, we increase the number of points per curve and repeat the entire process, obtaining the probability functions that are plotted in fig. [fig:mnvsp]. clearly the curves saturate as the number of points per curve grows. from this point on we have fixed the number of points per curve at this saturation value.
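the whole protocol can be condensed into one illustrative monte carlo routine built from the pieces sketched above; the ensemble sizes below are placeholders, not the values used in the actual simulations.

```python
def p_ls(n_receptors, n_odorants, c_min, c_max, m,
         n_affinity_sets=10, n_concentration_sets=10):
    # Estimate the probability that the discretized curves are linearly
    # separable: draw affinities, draw concentrations, run the learning
    # algorithm, and average over both random elements.
    successes, trials = 0, 0
    for _ in range(n_affinity_sets):
        K = sample_affinities(n_receptors, n_odorants)
        for _ in range(n_concentration_sets):
            X = training_set(K, c_min, c_max, m)
            separable, _ = learn_or_reject(X)
            successes += int(separable)
            trials += 1
    return successes / trials
```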
this numerical result can be estimated by using the analysis of gardner and derrida for the capacity of biased patterns, using the appropriate value for the "magnetization". this gives, in addition to the leading behavior, logarithmic corrections as well; we cannot rule out the possibility of such logarithmic corrections to the scaling we found here. in all our experiments we fixed the number of points per curve in this way, and hence the dependence of the probability on this variable has been suppressed. for various values of the remaining parameters we calculate the probability of separability in the manner described above. keeping the array size and the concentration range fixed, we increase the number of background odorants; the probability of *ls* decreases as this number increases, and we stop increasing it when the probability becomes smaller than some preset value. the variation of the probability with the number of odorants is presented, for three values of the concentration range and four values of the array size, in fig. [fig:gfh]. the results presented in these figures are discussed in the next subsection. we should mention here that for large arrays we used a heuristic modification of the nd halting criterion to label a problem as non-*ls*. typical evolutions of the despair parameter are shown in figures [fig:dsino]. each curve represents the history of a single learning set. notice the huge difference in scales between the learnable and the unlearnable cases. the wide separation in final values of the despair suggests that a more practical, e.g. smaller, upper bound be used. for the largest array size treated here we used a different halting criterion in order to escape from the need to reach an exponentially high upper bound. after a small number of successful trial runs (that did produce linear separability) we identified the highest value of the despair that was reached for a learnable set; this value was used to define our new heuristic halting criterion. as expected, for a small number of background odorants the probability of linear separability is close to 1, and it decreases as this number increases. the curves obtained for a fixed concentration range become sharper as the array size increases. note that curves obtained for different array sizes cross at approximately the same value of the scaled number of odorants. similar behavior of the corresponding probability functions has been observed for random uncorrelated patterns. notice, however, that the crossing point is at some intermediate probability. similar curves, obtained for other architectures, such as the parity and committee machines, cross in a similar fashion. if there is a sharp transition in the thermodynamic limit, these curves should approach a step function; that is, below a certain critical value a learning set will be *ls* with probability one and, conversely, above it a learning set will be *ls* with probability zero. the manner in which such a step function is approached as the system grows can be described by a finite size scaling analysis. for each concentration range (kept fixed) we tried a simple rescaling of the control variable, with two adjustable parameters; for the proper choice of these parameters we expect _data collapse_, that is, curves obtained for different array sizes are expected to fall onto a single function, provided the probability is plotted versus the scaled variable. as can be seen in figures [fig:scalh]a, b and c, this expectation is borne out; the evidently good data collapse indeed substantiates the idea of a sharp transition. as the array size increases, the probability function becomes increasingly sharper; its width near the transition decreases at a rate governed by the scaling exponent. finally, we present in figure [fig:alphac] the behavior of the critical capacity as a function of the concentration range (with the other parameters fixed).
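the data collapse itself is a small optimization over two parameters. the sketch below assumes a simple rescaling of the form (alpha - alpha_c) * n**nu, which may differ from the exact scaling form used in the paper, and scores a candidate pair by how tightly the rescaled curves overlap.

```python
def rescale(alpha, n, alpha_c, nu):
    # Scaled variable for the data collapse (assumed functional form).
    return (alpha - alpha_c) * n**nu

def collapse_quality(alpha_c, nu, datasets):
    # datasets: list of (n, alpha_array, p_ls_array), one entry per system
    # size.  After rescaling, a good (alpha_c, nu) makes all points fall on
    # a single curve, so the summed jumps between consecutive points
    # (sorted by the scaled variable) become small.
    xs = np.concatenate([rescale(a, n, alpha_c, nu) for n, a, _ in datasets])
    ps = np.concatenate([p for _, _, p in datasets])
    order = np.argsort(xs)
    return float(np.sum(np.diff(ps[order]) ** 2))
```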
as increases , separation of the curves becomes an increasingly difficult task and hence decreases .we find that it saturates at a low value close to , which is exactly the cover result .this interesting point is explained in the appendix .note that even though we deal here with linear separability of _ curves _ , which one would expect to be a more difficult task than separating points , we found that our exceeds the value derived for points , .the reason is that this is the critical capacity for separating _random , independent _ points ; the curves we are trying to separate are _ not independent of each other_. in fact by construction we have for all the background odorants ; hence all these curves lie on one side of an entire family of planes .the target odorant , which also satisfies , should lie on the other side of the separating plane .the curve is , in effect , a _ phase boundary _ ;on one side we have a phase " in which the problem is * ls * , while on the other ( high ) region it is not .we present now a brief description of the manner in which linear separability breaks down as we cross this phase boundary by increasing at fixed .the manner in which * ls * breaks down as increases beyond the phase boundary is nicely illustrated by the set of figures [ fig : ls ] and [ fig : nls ] .consider the curves , in an -dimensional space , which represent the odorants , in a linearly separable case .we present in figure [ fig : ls](a ) a projection of these curves onto a randomly chosen plane .one of these ( indicated by an arrow ) is the target odorant ; it seems to be entangled with the other curves .the point at which all curves seem to converge corresponds to the maximal concentration .the purpose of the learning dynamics is to find a particular direction * w * , along which one is able to separate the target curve from the others .denote by the hyperplane that passes through the origin and is _ perpendicular _ to * w * ; this is the linear manifold that separates the target from all the background curves .select now any plane , that contains * w * , and project all curves onto ; this produces fig .[ fig : ls](b ) .the horizontal dotted line shown here is the intersection of the hyperplane with the plane .the projected background odorant curves lie on one side of this line and the target on the other .the situation depicted here is * ls*. consider now what happens when we turn the problem into non - * ls * by increasing beyond the phase boundary .as we increase the maximal concentration , the target odorant s curve penetrates to the wrong " side of the hyperplane . 
a picture of this situation is shown in fig .[ fig : nls](a ) .this is a non * ls * problem - which means that no matter how long we run our learning algorithm , we will never find a hyperplane that separates the target from all the background .if nevertheless we keep running our learning algorithm , the direction of our candidate for * w * will keep changing as we learn " ,but since the critical capacity curve of figure [ fig : alphac ] has been crossed , no amount of further learning will produce a separating plane .the density of points near the high concentration limit is much larger than for low concentrations .hence further learning will perhaps be able to separate the target from the background at high concentrations - but then separability breaks down at low concentrations ( see fig .[ fig : nls](b ) ) .in the olfactory bulb of most vertebrates , each secondary neuron ( mitral or tufted cell ) receives input from only one glomerulus , which in turn is innervated , in all likelihood , by axons stemming from olfactory epithelial sensory cells that all express the same olfactory receptor protein .thus , the grandmother cell modeled here may not simply represent a mitral or tufted cell . however , when the network of periglomerular and granule cells ( interneurons ) is taken into account , then it is fair to state that each mitral cell receives ( indirect ) input from a large number of different olfactory receptor types .thus , the present analysis may be relevant to the kind of neuronal processing that takes place in the first neuronal relay station of the olfactory pathway , the olfactory bulb .alternatively , it may represent , in abstract fashion , information processing that takes place both in the olfactory bulb and at higher olfactory central nervous system centers .previously , several studies have been published that analyze neuronal networks for the olfactory system . however , none of these was based on a quantitative model for the affinity relationships within the entire olfactory receptor repertoire . here , we use the receptor affinity distribution ( rad ) model , which was developed , based on general biochemical considerations , for receptor repertoires , including that of olfactory receptors .the power of this approach is in utilizing a global knowledge about the repertoire to analyze the fidelity of discrimination among odorants .it has been pointed out in the past , that the rad model may be used to analyze the signal to noise ratio in systems in which specific binding to a receptor has to be distinguished from the background of numerous other receptors which constitute `` non - specific binding '' . 
here, we apply a similar concept to an analysis of signal to noise discrimination in the case of a neuronal network whose input stems from a receptor repertoire. the results presented here suggest that for a fixed number of background odorants there is a maximal odorant concentration beyond which odorant discrimination becomes impossible. this is not surprising, since olfactory receptors are saturable, and at very high concentrations weak affinity receptors as well as high affinity ones will generate comparable signals. however, it is noteworthy that despite the fact that the capacity for odorant discrimination rapidly declines as the odorant concentration goes up, the presently analyzed network is still capable of discrimination even at concentrations of the order of a few hundred times the inverse of the average affinity. the model network consists of sensory neurons, each of which is characterized by a set of affinities to a number of odorants. when any particular odorant is present, each sensory neuron produces a (nonlinear) response. these responses constitute the inputs to a single processing unit (secondary neuron), which performs a weighted summation of all the inputs; the secondary neuron's output is the sign of this weighted sum. the aim of this single processing unit is to identify _one single odorant_ and separate it from all the others that may be sensed by the system. this secondary neuron plays the role of a "grandmother cell" for a particular target odorant. an assembly of such secondary neurons may constitute, together with the sensory neurons, a system that is able to clearly identify the presence of target odorants against a background of other odorants. we posed a well defined quantitative question: given that each odorant may appear with a concentration that lies in a certain range, what is the maximal number of background odorants from which a single target can be separated with probability 1? the answer is summarized in fig. 5, where the critical capacity is plotted versus the concentration range. the result is obtained in the limit of a large number of sensory neurons (in fact, for arrays of a few hundred neurons this result should already give excellent precision). for a dynamic range of concentrations of about 100 we find that, with a sufficiently large sensory array, we can distinguish the target from about 750 background odorants. hence if we assemble 750 odorants and appoint a grandmother cell for each, we will be able to identify them one by one. in order to get this quantitative answer we had to generalize an old problem, that of _linear separability_ of points on a high-dimensional hypersphere, to the new problem of linearly separating _curves_ that lie on the same hypersphere. we have shown that, in order to represent a curve by discrete points that lie on it, we have to place a sufficiently large number of points on each curve. the results were obtained by a perceptron learning algorithm that signals when a problem is _unlearnable_, i.e.
non-linearly-separable. the behavior of the phase boundary at large concentrations (figure 5) is quite surprising, since the network might be expected to enter a totally confused state due to the saturation of the nonlinear sensory neurons, which could be expected to degrade the capacity much more severely. that the cover result is recovered in the high concentration regime can in fact be understood by the following argument. we first calculate the probability that a sensory unit gives a response close to saturation upon the presentation of an odorant in the range (1), where the average is taken over concentrations uniformly distributed in the range (1) and, according to the rad model, over the affinities of equation (2); the response function is given by equation (6). the integrals lead to an expression involving the complementary error function. this probability distribution has a single peak which sharpens and moves to higher response values as the concentration grows. however, at the very ends of the response interval the probability is zero, for every concentration; this is the source of the surprise. the peak, which concentrates all the probability, gets arbitrarily close to saturation as the concentration increases, but never reaches the extreme of the interval. therefore the components of the response vectors will lie, with overwhelming probability, close to the peak position, which is strictly positive. neglecting second order terms, the normalized patterns are then unbiasedly distributed around a common direction. we are taken back to the original cover-gardner problem of separating unbiased patterns with a hyperplane, and the result is no longer a surprise. this argument does not deal with the asymptotic behavior of the capacity in the presence of any kind of noise; in that case the naive expectations for the high concentration limit are probably borne out.
*acknowledgements* we thank ido kanter for most useful discussions. the research of ed was supported by the germany-israel science foundation (gif), the minerva foundation and the us-israel binational science foundation (bsf). the work reported here was initiated during visits of nc to the weizmann institute, which were supported by grants from the são paulo society of friends of the weizmann institute and by the gorodesky foundation. jept's research was supported by a graduate fellowship of the fundação de amparo à pesquisa do estado de são paulo (fapesp). nc received partial support from the conselho nacional de desenvolvimento científico e tecnológico (cnpq).
d. lancet, e. sadovsky and e. seidman, proc. natl. acad. sci. usa *90*, 3715 (1993)
l. schläfli, theorie der vielfachen kontinuität, gesammelte mathematische abhandlungen, ed. steiner-schläfli-komitee, basel, birkhäuser, p. 171 (1852), _apud_ w. kinzel, _phil. mag._ b *77*, 1455 (1998)
e. gardner, j. phys. a *21*, 257 (1988)
e. gardner and b. derrida, j. phys. a *21*, 271 (1988)
d. nabutovsky and e. domany, neural computation *3*, 604 (1991)
e.g. v. privman, ed., _finite-size scaling and numerical simulations of statistical systems_, world scientific, singapore, 1990
w. nadler and w. fink, phys. rev. lett. *78*, 555 (1997)
f. rosenblatt, _principles of neurodynamics_, spartan books, new york (1962)
l. f. abbott and t. b. kepler, j. phys. a *22*, l711 (1989)
t. cover, ieee trans. electron. comput. *14*, 326 (1965)
minsky and papert, _perceptrons_, mit press, cambridge, ma (1969)
d. lancet, a. horovitz and e. katchalski-katzir, molecular recognition in biology: models for analysis of protein-ligand interactions, in: behr, j.p., ed.,
john wiley and sons ltd., 25-71 (1994)
w. freeman, http://sulcus.berkeley.edu/flm/ms/wjfmm.html
j. hopfield, proc. natl. acad. sci. usa *88*, 6462 (1991)
m. a. wilson and j. m. bower, j. neurophysiol. *67*, 981 (1992)
z. li and j. hopfield, biol. cybern. *61*, 379 (1989)
z. li, biol. cybern. *62*, 349 (1990); and modeling the sensory computations of the olfactory bulb, in: models of neural networks, vol. 2, eds. j. l. van hemmen, e. domany and k. schulten, springer-verlag, new york (1995)
z. li and j. hertz, network: computation in neural systems *11*, 83 (2000)
we introduce and study an artificial neural network, inspired by the probabilistic receptor affinity distribution model of olfaction. our system consists of sensory neurons whose outputs converge on a single processing linear threshold element. the system's aim is to model discrimination of a single target odorant from a large number of background odorants, within a range of odorant concentrations. we show that this is possible provided the number of background odorants does not exceed a critical value, and we calculate this critical capacity. the critical capacity depends on the range of concentrations in which the discrimination is to be accomplished. if the olfactory bulb may be thought of as a collection of such processing elements, each responsible for the discrimination of a single odorant, our study provides a quantitative analysis of the potential computational properties of the olfactory bulb. the mathematical formulation of the problem we consider is one of determining the capacity for linear separability of continuous curves embedded in a high-dimensional space. this is accomplished here by a numerical study, using a method that signals whether the discrimination task is realizable or not, together with a finite size scaling analysis.
it is widely known that first-generation laser interferometric gw detectors suffer from a large number of noise sources of various natures. the major limiting factors at low and middle frequencies are seismic noise and thermal noise, which can be referred to the class of displacement noise of the test masses. at high frequencies photon shot noise is dominant. in the standard-quantum-limited (sql) second-generation detectors now under preparation, the cause of the sql is the fluctuating force of radiation pressure in the laser beam (back-action noise) pushing the interferometer mirrors in a random manner; thus the standard quantum limitation also arises due to displacement noise. each method of suppressing or eliminating displacement noise proposed to date is suited for controlling only one kind of noise. for instance, active antiseismic isolation will definitely suppress seismic noise but is helpless against thermal noise or quantum radiation pressure. on the other hand, quantum-non-demolition (qnd) schemes of measurement are able to cancel back-action noise but are certainly not suited for dealing with seismic or thermal noise. however, recently a revolutionary new method of displacement noise cancelation has been proposed which simultaneously eliminates the information about all external fluctuating forces but leaves a certain amount of information about the gravitational waves. the major idea is to construct an interferometer that responds differently to the motion of the test masses and to the gws; then a proper linear combination of the interferometer responses will cancel the fluctuations of the test masses, leaving non-vanishing information about the gws. one may find at least two different methods proposed to date. the first one, described in a series of papers by s. kawamura _et al._, relies on the distributed nature of the gws. this can be best explained from the viewpoint of a local observer (or the local lorentz gauge). in such a reference frame the interaction of the gw with a laser interferometer adds up to two effects. the first one is the motion of the test masses in the gw tidal force-field. in this aspect gws are indistinguishable from any non-gw forces, since both are sensed by the light wave only at the moments of reflection from the test masses; we will refer to this as the _localized_ nature of the forces acting on the test masses. if the linear scale of a gw detector is much smaller than the gravitational wavelength (the so-called long-wave approximation), then the effect of the gw force-field is of first order in the gw amplitude. the relative motion of the test masses in any force field cannot be sensed by one of them faster than the light travel time between them, which gives rise to correction terms of higher order in the ratio of the detector scale to the gravitational wavelength. the second effect is the direct coupling of the gw to the light wave as it propagates between the test masses; this distributed effect appears only at higher order in the long-wave approximation. ultimately, from the viewpoint of a local observer, displacement-noise-free interferometry implies the cancelation of the localized effects (gw and non-gw forces) while leaving non-vanishing information about the distributed effect (the direct coupling of the gw to light). it was pointed out in refs. that in order for a gw detector to be a truly displacement-noise-free interferometer it should also be free from optical laser noise, since the latter is indistinguishable from laser displacement noise.
their sum is usually called laser phase noise .cancelation of laser phase noise in interferometric experiments is usually achieved by implementing the differential ( balanced ) schemes of measurements : in conventional interferometers ( such as ligo ) it is the michelson topology and in dfis proposed in ref . it is the mach - zehnder ( mz ) topology .although dfi detectors that bear on distributed nature of gws allow complete elimination of displacement noise , the `` payment '' for such a gain is the significant weakening of the gw response at low frequencies .this fact directly follows from the mechanism of noise cancelation : together with displacement noise we also cancel gw terms of the and orders , leaving only the ] order at low frequencies , as described above .if and then such a displacement - noise - free gw response will be proportional to in spectral domain .in addition , this response can be further amplified with a fp cavity , for instance. then one should expect the rise of the resonant multiplier , where is the cavity half - bandwidth .ultimately , the strongest dfi gw response allowed by the first principles should be proportional to in spectral domain .the optical setup satisfying the requirements of practical reasonableness and maximum completeness of displacement noise elimination , also restricted by condition of the strongest response derived above , does not , however , immediately follows from some basic principles .it is a matter of search at large , limited by practically reasonable configurations and assumptions .for instance , in this paper introducing several model assumptions we propose a pair of symmetrically positioned michelson interferometers with fabry - perot cavities inserted into each arm of both interferometers , as a dfi gw detector with a reasonably simple optical setup and the strongest possible response .first , consider a conventional ligo topology ( without power- and signal - recycling mirrors ) .let the end mirrors be partially transmittible .in this case an interferometer will produce three response signals : the reflected ( laser - noise - free ) one in conventional dark park and the transmitted ones in the arms .it is worth noting here that certain care is required when calculating the responses : each response signal should be evaluated in the proper reference frame of the detector that detects the corresponding signal .otherwise , unmeasurable quantities may arise in the analysis .an experimentalist is able to measure the quadrature components of the interferometer responses and record them for further processing . from the set of transmitted signalsquadratures it is possible , in principle , to construct a laser - noise - free linear combination .therefore , at this stage we may obtain two signals ( quadratures ) free from laser phase noise : the reflected one and the combined transmitted one .due to the sophisticated frequency dependence of the fp cavities responses these two signals can be combined in turn to eliminate one of four differential mechanical degrees of freedoms associated with the test masses ( beamsplitter , two input mirrors , two end mirrors and two end detectors ) .at this stage we introduce some restrictions into the optical scheme : ( i ) end detectors and end mirrors , and ( ii ) input mirrors and beamsplitter are assumed to be rigidly connected .the practical legitimacy of these assumptions seems questionable and is open for criticism , although , no basic principles forbid such a gedanken ( thought ) experiment . 
under these restrictionswe are left with only two differential degrees of freedom , one of which can be eliminated in a combination of two laser - noise - free signals .we choose to cancel displacement noise associated with the differential motion of the end platforms . ultimately , due to the symmetry of plane gw wavefront we are able to cancel the fluctuations of the central platform ( with beamsplitter and input mirrors ) if the similar interferometer is positioned symmetrically ( see fig .[ pic_double_michelson_fp ] in the text ) and both interferometers have common central platform ( this is another gedanken - experiment - supposion ) . in this casethe single - interferometer partial dfi responses will have gw term of the same sign but the fluctuations of the central platform will enter with different signs .adding two single - interferometer responses we cancel displacement noise of the latter . then the obtained laser- and displacement - noise - free gw response signal turns out to be proportional to amplified with the cavity resonant gain .let us first remind briefly the `` tools '' necessary for our further considerations . as explained in ref . , to obtain physically reasonable , i.e. measurable , quantities , calculations should be performed in the proper reference frames of the devices that produce the corresponding experimentally observed quantities .since they are usually subjected to external fluctuative forces , they commit random motions and thus we have to deal with their proper non - inertial reference frames . corresponding tools for solving certain boundary electrodynamical problems in such reference frames have been developed in ref . and utilized in ref . . therefore , in this paper we will not retell the content of these works in detail but will write several useful formulas in this section .in particular , the space - time of an observer having non - geodesic acceleration along the -axis and falling in the weak , plane , +-polarized gw propagating along the -axis takes the following form : + dx^2+dy^2+dz^2\nonumber\\ & + \,\frac{1}{2}\,\frac{x^2-y^2}{c^2}\,\ddot{h}(t - z / c)\ , ( c\,dt - dz)^2 .\label{eq_metric_tensor}\end{aligned}\ ] ] conditions of linearized theory and are assumed to be satisfied for all reasonable and . without the loss of generality we may assume when considering one - dimensional problems .consider a test mass which in a state of rest ( no fluctuations and no gw ) has the coordinate with respect to the observer ( also in a state of rest ) . if the test mass is subjected to some external fluactuative force which moves it according to the motion law as seen from the laboratory frame ( for instance , the one associated with the earth surface ) , then its motion law with respect to the observer in space - time ( [ eq_metric_tensor ] ) is : where is the observer s law of motion as seen from the same laboratory frame .it is assumed here that .in fact , eq . ( [ eq_law_of_motion_2 ] ) is the coordinate transformation from laboratory frame to the observer s frame . in spectral domain which will be widely used it is also important to take into account the effects imposed by the gw and acceleration fields on the electromagnetic waves propagating in space - time ( [ eq_metric_tensor ] ) .it has been derived in refs . 
that the waves propagating in the positive and negative directions of the -axis can be described by the following vector potentials : ^{-i(\omega_0t\mp k_0x)}\nonumber\\ & + a_\pm(x , t)e^{-i(\omega_0t\mp k_0x)},\label{eq_emw}\end{aligned}\ ] ] where ,\end{aligned}\ ] ] and .\ ] ] both and describe the distributed effects : is responsible for the direct coupling between the gw and the electromagnetic wave and describe the redshift imposed on the electromagnetic wave by the noninertiality of the reference frame .weak fields describe electromagnetic fluctuations ( classical or quantum ) .in this paper we will also deal with the motions along the -axis . in this caseall the formulas remain the same but the gw function should be taken with the opposite sign ( this follows from the metric ( [ eq_metric_tensor ] ) ) .let us consider the operation of a single fabry - perot cavity as a gw detector ( see fig .[ pic_fp_cavity ] ) . and . cavity is pumped by laser l through mirror with the input wave and through mirror with the vacuum - state wave .optical field inside the cavity is represented as a sum of the wave , running in the positive direction of the -axis , and the wave , running in the opposite direction .the wave reflected from the cavity is measured with the ( amplitude or balanced homodyne ) detector with the reference oscillation ( if necessary ) produced by laser l. transmitted wave is measured with the ( amplitude or balanced homodyne ) detector with the reference oscillation ( if necessary ) produced by some local source . ]cavity is assembled of two movable mirrors and , both lossless and having the amplitude transmission coefficient , .we put distance between the mirrors in the absence of the gravitational wave and optical radiation to be equal to .the incident gw is assumed to be weak , plane , +-polarized and propagating along the -axis . then without the loss of generalitywe assume the cavity to be lying in the plane along one of the gw principal axes , coinciding with the -axis .cavity is pumped by laser l whose center of mass commits a fluctuative motion along the -axis as seen from the laboratory frame .both mirrors and have associated displacement noise and .finally , both the detectors and that measure the reflected wave and transmitted wave correspondingly fluctuate as and .to evaluate the response signals of the cavity we should perform the calculations in the proper reference frames of detectors and , as pointed above : the reflected signal is measured with the first one and the transmitted signal is measured with the latter one . herewe will derive the expression for the wave reflected from the cavity .since it is detected by we will perform the calculation its proper reference frame .this section mostly repeats the similar considerations in ref . 
, therefore we will proceed without detailed comments .the origin of the coordinate system is assumed to be set up at the center of mass of .therefore , according to eqs .( [ eq_law_of_motion_1],[eq_law_of_motion_2 ] ) test masses of the system will have the following motion laws with respect to : in last two equations we neglected the small distance ( compared to the cavity length ) between the optical bench where laser and detector are located and the input mirror .let the cavity be pumped by laser l through the input mirror with the input wave \nonumber\\ & \quad\times\exp\biggl\{-i\omega_0\left[t- \frac{x - x_\textrm{l}(t)}{c}\right]\biggr\}\nonumber\\ & \quad+a_{\textrm{in}}(x , t)e^{-i(\omega_0t - k_0x)},\label{eq_input_wave_r}\end{aligned}\ ] ] strictly speaking , the argument of here should depend on itself like , but since is already the quantity of the 1st order of smallness we can neglect such dependence .the vacuum - state pump through mirror can be written as : }. \label{eq_vacuum_wave_r}\ ] ] here is the `` weak '' field describing optical laser noise of the pump wave and is the `` weak '' field describing vacuum noise in the opposite input port .remind , that both the laser and mirror are located at , where , thus input wave does not acquire distributed phase shift when it reaches mirror .it is convenient to represent the optical field inside the cavity as a sum of two waves , and , running in the opposite directions : e^{-i(\omega_0t\mp k_0x)}\nonumber\\ & + a_{\pm}(x , t)e^{-i(\omega_0t\mp k_0x)}.\label{eq_inside_wave_r}\end{aligned}\ ] ] here describe the phase shift accumulated by the light wave while circulating inside the cavity .output wave reflected from the cavity is : e^{-i(\omega_0t+k_0x)}\nonumber\\ & + a^{\textrm{r}}_{\textrm{out}}(x , t)e^{-i(\omega_0t+k_0x ) } , \label{eq_reflected_wave}\end{aligned}\ ] ] if detector is a quadratic amplitude detector then it measures the quantity proportional to ( neglecting very small terms of the order of ) .if detector is a balanced homodyne detector then it measures the quadratures of . in this casethe reference oscillation can be produced by laser l. we will call the reflected signal below . to obtain the reflected signal we substitute fields ( [ eq_input_wave_r ] [ eq_reflected_wave ] ) into the set of boundary conditions ( conditions of the electric field continuity along the surfaces of the mirrors ) : and solve the system with the method of successive approximations ( see ref .the required solution of the 1st order in spectral domain is : the following notations have been introduced : having the following physical meaning : describes the resonant amplification of the input amplitude inside the cavity , describes the frequency - dependent resonant amplification of the variation of the circulating light wave , and are the generalized coefficients of reflection ( from a fp cavity ) and transmission ( through a fp cavity ) , is the mean amplitude of the optical wave inside the cavity running in the negative direction of the -axis, is the mean amplitude of the wave reflected from the cavity and is the response to gw after a single round trip of light inside the cavity it is also very useful to analyze the physical meaning of each summand in formula ( [ eq_reflected_signal_fp ] ) : 1 . .this term states that the optical laser noise is indistinguishable from laser displacement noise so both always come together and their sum is usually called laser phase noise . 
in spectral domainreflected wave obviously contains laser phase noise multiplied by the generalized coefficient of reflection . is the vacuum noise from the opposite input port which is transmitted through the cavity and comes with the corresponding coefficient of transmission in the reflected signal ./\mathcal{t}^2_{\omega_0+\omega} ] describes laser noise transmitted through the cavity .the summand with means that the initial phase shift due to laser displacement is additionally redshifted with the gw and the motion of observer ( detector ) , because gw and acceleration fields change the rate of laser clock with respect to detector clock . it should be also noted that even the amplitude detector will be susceptible to ( in time domain ) , since phase modulation is transformed into amplitude modulation in a fp cavity . is the vacuum noise reflected from the cavity into the transmitted port .^{i\omega\tau}/ { \mathcal{t}^2_{\omega_0+\omega}}$ ] describes the total variation of the phase accumulated inside the cavity . accounts for the displacement of the receiver .if detector is the amplitude one , this term becomes unmeasurable .the major disadvantage of a single cavity - based gw detector is the significant level of laser noise which dominates over other noises in practice . to cancel laser noiseone should implement a balanced optical setup , for instance , a michelson interferometer tuned to dark - port regime .let us consider a michelson - type interferometer with fp cavities in its arms ( see fig .[ pic_michelson_fp ] ) . and are inserted into michelson interferometer horizontal and vertical arms correspondingly .interferometer is pumped by laser l through beamsplitter bs with the input wave .beamsplitter produces to waves and which pump horizontal and vertical cavities respectively .interferometer is tuned to the dark - port regime so that both reflected waves and destructively interfere at the beamsplitter and all mean power returns towards laser l ( not shown in the fig . ) .the signal part containing the accumulated phase shift penetrates into the dark port and is incident on detector which operates as a balanced homodyne detector with the reference oscillation produced by laser l. the dark port of detector also produces the vacuum pump .transmitted waves and are detected with detectors and correspondingly which may operate as amplitude or balanced homodyne detectors . in the latter case reference oscillations are produced by some local sources .both detector ports also produce vacuum pumps and . ]laser l which randomly moves along the -axis as pumps the interferometer with the input wave . upon arrival to 50/50-beamsplitter bs which may fluctuate along the - and -axes as and respectively ,the input wave is splitted into two waves and which pump horizontal and vertical arms respectively .fabry - perot cavity in the horizontal arm is assembled of two mirrors and which fluctuate along the -axis as and .the similar cavity in the vertical arm is assembled of mirrors and which fluctuate along the -axis as and .both the cavities may produce reflected and transmitted waves . 
reflected waves and return towards the beamsplitter and interfere .assume the interferometer is tuned to dark port regime .this means that the reflected waves interfere destructively and the mean optical power returns towards the laser .however , the weak time - dependent ( signal ) part penetrates into the dark port and falls on detector which fluctuates along the -axis as .let the latter one operate as balanced homodyne detector with the reference oscillation produced by laser l. dark port of also produces the vacuum pump .transmitted waves and are measured with the corresponding detectors : in horizontal arm it is and in vertical arm it is moving randomly as and correspondingly .both detectors may operate either as amplitude or homodyne ones . in the latter case reference oscillationsshould be produced by some local sources ( see below ) .both detector ports also produce vacuum pumps and .let us now write the equations for input and output waves in an interferometer .we will not write the expressions for the waves explicitly ; one can obtain them straightforwardly making obvious changes in the formulas from the previous section . at beamsplitterthe relation between the input waves is as following : the relation between the reflected waves is : at this stage we do not define the reference frame , therefore , fields and coordinates of the test masses should be specified explicitly for this or that frame .these equations can be solved straightforwardly . however, we do not need to do this since we already know the solution for a single cavity ( [ eq_reflected_signal_fp ] ) .first , we need to write explicitly the expressions for weak reflected fields and .to do this we use the first pair of bs boundary conditions to obtain in spectral domain : now let us specify the reference frame . since both reflected waves will ultimately end up at detector , it is necessary to work in its proper frame .another way is to use the laboratory frame which implementation is justified with the round - trip situation : laser l and detector can be approximately considered as located at the same spatial position . in any case ,substitution eqs .( [ eq_mean_amplitudes ] [ eq_input_wave_vertical ] ) into formula ( [ eq_reflected_signal_fp ] ) we obtain : + \mathcal{t}b_{\textrm{vac}}\nonumber\\ & \quad+tb_{-0}2ik_0\ , \frac{(\xi_{b_2}-\xi^{\textrm{r.t.}}_{\textrm{gw}})e^{i\omega\tau}-\xi_{a_2 } } { \mathcal{t}^2_{\omega_0+\omega}}+b^{\textrm{r}}_{\textrm{out}0}2ik_0\xi_{a_2}.\end{aligned}\ ] ] here we assumed that the +-polarized gw is perfectly aligned along the interferometer arms .we also neglected the term proportional to since the signal wave does not include `` strong '' mean component .for simplicity let us assume that both the cavities have equal detunings and bandwidths .this results in and .the boundary condition for reflected waves dictates that : substituting here the obtained expressions for and we obtain : (\xi_{a_1}-\eta_{a_2})\nonumber\\ & \quad+\frac{1}{\sqrt{2}}\bigl[a^{\textrm{r}}_{\textrm{out}0}+ \mathcal{r}a_{\textrm{l}0}\bigr]ik_0(\xi_{\textrm{bs}}-\eta_{\textrm{bs } } ) .\label{eq_reflected_signal_michelson}\end{aligned}\ ] ] one can note that the obtained signal is very similar to the one of a single cavity .namely , the following degrees of freedom are equivalent from this viewpoint : , , .the latter relation means that the beamsplitter effectively cuts all laser phase noise introducing , however , its own displacement noise . 
in an experiment homodyne detector measures the quadrature components of .however , keeping this in mind , we will deal with the field amplitude itself , since calculations with quadratures result in very cumbersome formulas , while not changing the physical meaning of the ultimate results .now let us derive the transmitted signals .since they are detected by two different devices , each of the signals should be calculated in the proper reference frame of the corresponding detector .keeping in mind that and should be explicitly specified for each of these reference frame , we substitute eqs .( [ eq_mean_amplitudes ] [ eq_input_wave_vertical ] ) into formula ( [ eq_transmitted_signal_fp ] ) and obtain : \nonumber\\ & \quad+r^2te^{3i\omega_0\tau}a_{+0}2ik_0\ , \frac{(\xi_{b_1}+\xi^{\textrm{r.t.}}_{\textrm{gw}})e^{i\omega\tau}-\xi_{a_1 } } { \mathcal{t}^2_{\omega_0+\omega}}\,e^{i\omega\tau}\nonumber\\ & \quad+a^{\textrm{t}}_{\textrm{out}0}ik_0\xi_{\textrm{d}_a}+ \mathcal{r}a_{\textrm{vac}},\nonumber\\ % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % b^{\textrm{t}}_{\textrm{out}}&= \frac{1}{\sqrt{2}}\mathcal{t}(a_{\textrm{l}}+c_{\textrm{vac}})+ \mathcal{r}b_{\textrm{vac}}\nonumber\\ & \quad-\frac{1}{\sqrt{2}}\mathcal{t}a_{\textrm{l}0}ik_0 ( \xi_{\textrm{l}}-\xi_{\textrm{bs}}+\eta_{\textrm{bs}}-\xi^{\textrm{f.t.}}_{\textrm{gw } } -i\omega\tau\eta_{\textrm{d}_b})\nonumber\\ & \quad+r^2te^{3i\omega_0\tau}b_{+0}2ik_0\ , \frac{(\eta_{b_2}-\xi^{\textrm{r.t.}}_{\textrm{gw}})e^{i\omega\tau}-\eta_{a_2 } } { \mathcal{t}^2_{\omega_0+\omega}}\,e^{i\omega\tau}\nonumber\\ & \quad+b^{\textrm{t}}_{\textrm{out}0}ik_0\eta_{\textrm{d}_b}.\end{aligned}\ ] ] two different regimes of detection of transmitted signals are possible .1 . resonant regime .let both cavities be tunes to resonance. in this case amplitude detector measures coinciding with the amplitude quadrature measured by the homodyne detector , since is a pure real quantity .resonant regime means that we are tuned to the peak of the resonant curve . at this operating point variation of optical wave amplitudeis very weak ( ) .therefore , neither amplitude detector nor homodyne detector measuring amplitude quadrature can be used .instead , one should use the homodyne detector that measures the phase quadrature .however , in this case all the homodyne detectors should be synchronized with enough accuracy so that they have equal mean phase .otherwise , different detectors will measure slightly different quadrature components . 2 . non - resonant regime . in this casewe are tuned to the slope of the resonance curve and variation of amplitude of the optical wave is .amplitude detection can be used then .its major advantage is that it does not require synchronization between different amplitude detectors .one can also use homodyne detectors to measure amplitude or phase quadratures .it this case , however , synchronization between detectors will be required .let detectors and be homodyne detectors for definiteness so that they measure the quadratures of and correspondingly .the reference oscillations should be produced by some local sources , for instance , lasers that have the same carrier frequency and are synchronized with laser l. as usually required for the homodyne detectors , the amplitudes of these local oscillators are assumed to be much larger than the mean output amplitudes and . 
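the quadratures recorded by these detectors are combined later by a simple subtraction that removes the common laser term, as described in the next paragraphs; a toy time series makes the mechanism explicit. the relative signs below mirror that structure but are an assumption, not the paper's exact coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1 << 12
t = np.linspace(0.0, 1.0, n)

laser = rng.normal(size=n)             # common laser phase noise
disp_a = 0.1 * rng.normal(size=n)      # displacement noise, horizontal arm
disp_b = 0.1 * rng.normal(size=n)      # displacement noise, vertical arm
gw = 1e-2 * np.sin(2 * np.pi * 5 * t)  # toy gw signature

# toy transmitted quadratures: the laser term enters both with the same
# coefficient, the arm-specific terms with opposite relative signs
a_t = laser + disp_a + gw
b_t = laser + disp_b - gw

combined = (a_t - b_t) / np.sqrt(2.0)  # laser phase noise cancels identically
# what survives: (disp_a - disp_b)/sqrt(2) + sqrt(2)*gw -- the differential
# displacement noise and the gw signature remain to be handled by the
# further combinations described in the text
```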
in this casewe can neglect their intrinsic noises ( laser noises ) .however , they are required to be synchronized , i.e. have the same homodyne phase in order to measure identical quadratures .once the quadratures are measured they can be stored in a computer memory and later processed .for instance , an experimentalist may produce any desired linear combinations between them .let s consider the cosine quadratures of the signals for definiteness .a simple subtraction of quadrature from quadrature , evidently , cancels the term containing laser phase noise .similar , one can operate with the sine quadratures .this can be thought of as a possible method of laser noise cancelation from transmitted waves . in the case ofreflected waves elimination of laser noise takes place at the level of interference of field amplitudes and further recording of laser - noiseless field in the form of quadratures . in the case of transmitted waves we first record the quadratures containing laser noise andthen linearly combine them to produce the laser - noise - free quantity .however , the change in a sequence of procedures ( to combine first and then record or first record and then combine ) does not introduce any meaningful physical difference , therefore , in theory we may operate with transmitted field amplitudes without the need to perform cumbersome calculations with their quadratures . keeping this in mindwe construct the following combination of the fields : \nonumber\\ & \quad+\frac{1}{\sqrt{2}}\,r^2te^{3i\omega_0\tau}a_{+0}\,2ik_0\ , \frac{\eta_{b_2}-\xi_{b_1}-2\xi^{\textrm{r.t.}}_{\textrm{gw } } } { \mathcal{t}^2_{\omega_0+\omega}}\,e^{2i\omega\tau}\nonumber\\ & \quad-\frac{1}{\sqrt{2}}\,r^2te^{3i\omega_0\tau}a_{+0}\,2ik_0\ , \frac{\eta_{a_2}-\xi_{a_1}}{\mathcal{t}^2_{\omega_0+\omega}}\,e^{i\omega\tau}\nonumber\\ & \quad+a^{\textrm{t}}_{\textrm{out}0}ik_0(\eta_{\textrm{d}_b}-\xi_{\textrm{d}_a } ) .\label{eq_transmitted_signal_michelson}\end{aligned}\ ] ] here we assumed that due to equal characteristics of the cavities .again , one can establish the equivalence between this differential signal with the transmitted signal ( [ eq_transmitted_signal_fp ] ) of a single cavity .now we have two laser - noise - free signals ( [ eq_reflected_signal_michelson ] ) and ( [ eq_transmitted_signal_michelson ] ) which can be combined to cancel one of the fluctuative degrees of freedom .since there are four such quantities , , , and , we should somehow suppress two more degrees of freedom _ by hands _( the last one will be eliminated by the additional interferometer , see below ) .we introduce the following model assumptions ( see fig .[ pic_michelson_fp ] ) : 1 .both the input mirrors are rigidly attached to beamsplitter , i.e. and .the composite mass will be called platform with associated fluctuative degree of freedom .detectors and are rigidly attached to the end - mirrors and respectively , i.e. 
and .corresponding platforms will be called and and their differential degree of freedom .the realizability of these requirements remains an open practical question .let us now substitute the introduced relations between displacements into signals ( [ eq_reflected_signal_michelson ] ) and ( [ eq_transmitted_signal_michelson ] ) and rewrite them in terms of the input amplitude : ,\end{aligned}\ ] ] from these signals we can exclude either or .the following linear combination cancels the later quantity : \nonumber\\ & \quad-\frac{1}{2}\,\frac{rt^2e^{2i\omega_0\tau } } { \mathcal{t}^2_{\omega_0}\mathcal{t}^2_{\omega_0+\omega}}\ , a_{\textrm{l}0}\frac{\omega_0}{\omega}\,h\nonumber\\ & \qquad\times\bigl[\mathcal{t}^2_{\omega_0+\omega}(1-e^{i\omega\tau})^2 + i\omega\tau\mathcal{t}^2_{\omega_0}(1-e^{2i\omega\tau})e^{i\omega\tau}\nonumber\\ & \qquad\qquad-(\omega\tau)^2\mathcal{t}^2_{\omega_0}e^{2i\omega\tau}\bigr ] , \label{eq_end_mirr_noise_free_signal}\end{aligned}\ ] ] where is the combined vacuum noise .it is straightforward to verify that in the long - wave ( ) and narrow - band ( , where is the cavity half - bandwidth ) approximations signal reduces to : where is detuning from resonance .the tidal structure of metric ( [ eq_metric_tensor ] ) immediately suggests the method of cancelation of beamsplitter platform noise . consider a scheme with two michelson / fabry - perot interferometers having common central platform ( see fig .[ pic_double_michelson_fp ] ) .let us assume that we have eliminated displacement noise of the end - platforms of the second ( left - bottom ) interferometer and obtained the signal containing the fluctuations of central platform and the gw .this signal can be evaluated straightforwardly from formula ( [ eq_end_mirr_noise_free_signal ] ) replacing and keeping the gw function unchanged due to the symmetry of gw wavefront . ultimately , adding to we obtain signal free from displacement noise of the cental ( beamsplitters ) platform : .\label{eq_dfi_mfp_noise_free_signal}\end{aligned}\ ] ] here describes total vacuum noise in both interferometers . in long - wave and narrow - band approximationswe obtain : + \sqrt{2}\bigl[b^{(1)}_{\textrm{vac}}+b^{(2)}_{\textrm{vac}}- a^{(1)}_{\textrm{vac}}-a^{(2)}_{\textrm{vac}}\bigr]\nonumber\\ & \quad-\frac{\gamma}{\gamma - i\delta}\,a_{\textrm{l}0}\ , \frac{1}{(\gamma - i\delta - i\omega)\tau}\,ik_0(\omega\tau)^2\,\frac{1}{2}\,lh .\label{eq_dfi_mfp_noise_free_signal_long - wave}\end{aligned}\ ] ] here vacuum fields with upper index denote the vacuum fluctuations in detector ports of the first interferometer and vacuum fields with index denote the vacuum fluctuations in the corresponding ports of the second interferometer .it is convenient for methodological purposes to compare the susceptibilities to gws of the considered interferometer and of the conventional interferometer with michelson / fabry - perot topology ( without any recycling mirrors ) which response is described by the formula : let the readout schemes in both interferometers register the following quadrature(s ) : where is given either by formula ( [ eq_dfi_mfp_noise_free_signal ] ) for displacement - noise - free double michelson / fabry - perot topology ( without term ) , or ( [ eq_mfp_response ] ) for conventional michelson / fabry - perot topology . 
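To make the low-frequency behaviour of the last expression concrete, the sketch below evaluates, up to frequency-independent prefactors, the magnitude of the response factor (Ωτ)²/((γ − iδ − iΩ)τ) appearing in the long-wave / narrow-band approximation. The arm length, half-bandwidth and detuning are illustrative assumptions of ours, not values taken from the paper; the output displays the quadratic suppression of the GW response at low frequencies.

```python
import numpy as np

c = 299_792_458.0            # speed of light, m/s
arm_length = 4_000.0         # arm length, m (assumed, Advanced-LIGO-like)
tau = arm_length / c         # one-way light travel time
gamma = 2 * np.pi * 100.0    # cavity half-bandwidth, rad/s (assumed)
delta = 2 * np.pi * 50.0     # detuning from resonance, rad/s (assumed)

freqs = np.logspace(0, 4, 9)                 # GW frequency in Hz
omega = 2 * np.pi * freqs
resp = (omega * tau) ** 2 / np.abs((gamma - 1j * delta - 1j * omega) * tau)
for f, r in zip(freqs, resp):
    print(f"{f:10.1f} Hz   {r:.3e}")
```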
to compare the gw sensitivities we define the transfer function as the ratio of gw response quadrature to .we plotted the absolute values of both transfer functions in fig .[ pic_transfer_function ] for the following parameters ( most close to advanced ligo ) : km and . for comparison we chose two values of detuning for each system : hz .hz ) , and for double michelson / fabry - perot dfi ( hz ) .[ pic_transfer_function ] clearly demonstrates strong gw suppression at low frequencies according to the predicted -law in eq .( [ eq_dfi_mfp_noise_free_signal_long - wave ] ) .however , at higher frequencies hz both traditional and dfi topologies acquire approximately equal level of gw susceptibility .let us briefly summarize all the essential model assumptions that we used for our gedanken experiment . 1 . for a single michelson / fabry - perot interferometer we assumed that the end - photodetectors are rigidly attached to the end - mirrors and input mirrorsare rigidly attached to the beamsplitter . for a pair of interferometers we assumed that both beamsplitters and all the input mirrors are mounted of the common central platform .although these assumptions do not contradict any fundamental principles , the question of their practical realization is highly questionable , at least for the ground - based facilities .in addition , even such a composite platform does not allow full cancelation of its internal thermal noise ( our substitution is valid for displacement of the center of mass only ) .however , in principle , one may think of constructing a space - based interferometer with arm - cavity lengths of several hundreds meters or kilometers which will be most sensible to gws at frequencies below 1 hz . at such low frequencies the model of rigid platformmay look more attractive from the experimental point of view than that at higher frequencies in the earth - bound environment . another way to soften the complexity of the optical scheme is to `` squeeze '' geometrically the additional interferometer so that both interferometers share the same beamsplitter .nevertheless , this does not cancel the requirement that all the input mirrors are attached to the beamsplitter .if the end - detectors of the cavities operate as homodyne detectors then the local oscillators are assumed to be present .the frequency of these local oscillators should coincide with the carrier frequency of the source laser , so they should be kept synchronized with it . 
In addition, all the homodyne detectors themselves should be synchronized with each other in such a way that they have identical homodyne phase; otherwise they will measure different quadrature components. The use of amplitude detection therefore looks much more attractive from the practical point of view.

In this paper, in the form of a gedanken experiment, we have analyzed the operation of a double Michelson/Fabry-Perot interferometer performing laser- and displacement-noise-free gravitational-wave detection. It has been demonstrated that if certain model requirements are met (the input mirrors and beamsplitters can be rigidly mounted on a single platform, and the end detectors can be rigidly attached to the end mirrors), it is possible to construct a linear combination of the interferometer responses (their quadratures) that produces the strongest displacement-noise-free response to the gravitational wave allowed by general relativity. Namely, the DFI response function turns out to be proportional to , where is the GW frequency, is the length of the interferometer arms and is the cavity bandwidth. However, the question of the practical realizability of our model assumptions is open for criticism.

The authors would like to thank F.Ya. Khalili and S.L. Danilishin for valuable critical remarks and comments on the paper. We would also like to express our gratitude to A. Freise, S. Hild and S. Chelkowski for their hospitality and support during our stay at Birmingham University and for the inspiring discussions which greatly helped to improve our research.
In this paper we demonstrate that a double Michelson interferometer with Fabry-Perot cavities in its arms is able to perform laser- and displacement-noise-free gravitational-wave (GW) detection if certain model assumptions are met. Assuming that the input mirrors of a single Michelson/Fabry-Perot interferometer can be rigidly attached to the beamsplitter on a central platform, one can combine the interferometer's response signals so as to cancel the laser noise and the displacement noise of all test masses except the central platform. A pair of symmetrically positioned Michelson/Fabry-Perot interferometers with a common central platform can then be made insusceptible to the latter as well, thus allowing complete laser- and displacement-noise-free interferometry (DFI). It is demonstrated that the DFI response to GWs of the proposed interferometer is proportional to , where is the cavity half-bandwidth, which is the strongest DFI response allowed by general relativity.
the explanation of the origin of power law or scale invariant features associated with complex systems has been an active and fascinating area of research in statistical physics .this feature is well recognized in natural science as zipf s law that states how frequently a word occurs in a given typical text is inversely proportional to its rank such as with close to 1 .typical instances range from the size - distributions of cities , commercial firms , and fluctuations in financial markets , to diameters of moon craters ( see ref . for more examples ) .however , in many situations the exponent is not exactly 1 but takes value in [ 0 , 1 ] .although frequency versus rank plots are not common in statistical physics , the frequency distribution has a decaying power law dependence , with the two exponents being related ; the exponent for frequency distribution .recently , a state or sample space reducing ( ssr ) stochastic process has been proposed as a mechanism to explain scale invariance , in particular , zipf s law .earlier proposed mechanisms include self - organized criticality , preferential attachment , a combination of exponentials , an inverse of quantities , and multiplicative process .ssr mainly reflects the process of ageing in complex systems , where the size of state space , namely the set of all possible allowed states , reduces as time advances .striking examples of an ssr include the process of sentence formation in linguistics , fragmentation of materials , polymerisation process , and diffusion on weighted , directed and acyclic graphs . the sample space may not be strictly reducing in some cases , but one can incorporate the occasional expansion as a fluctuation ; this is termed a noisy ssr process . a mapping between seemingly different problems in statistical physics not only facilitates computation but can also provide deeper insight .interesting examples from this point of view include the mapping of percolation on trees to brownian excursions , or the directed abelian sandpile on a narrow strip to a random walk on a ring or the fragmentation and aggregation process in polymerization to the study of cycles in random permutation with uniform measure . in the present context , the mapping between survival time statistics for ssr processes and records statistics of independent and identically distributed ( iid ) random variables is most pertinent . in this work ,our focus is on the _ noisy _ ssr process .we first review the recent developments on the ssr process ; in previous work the statistics of survival times , defined as the life span of the ssr process was studied .exact results , supported by simulations , showed that both mean and variance of the survival time vary in a logarithmic manner as a function of system size , and the asymptotic probability distribution is gaussian . the correspondence noted here , between the survival time statistics for the ssr process and the records statistics of iid random variables not only provides a deeper insight about the system , but it also becomes apparent to identify other applications of the ssr process , for example , polymerization process to cycles statistics in random permutation . 
we show here that the records statistics in a random time series , suitably modelled as a correlated random process ( such as random walk ) is potentially equivalent to noisy ssr process , and can be completely characterised by a parameter .studies of records statistics are relevant in many applications such as financial time series or price fluctuations of a commodity that are examples of correlated noisy signal .furthermore , the noisy process with is equivalent to the records statistics problem in standard brownian motion or in the standard random walk .the organization of the paper is as follows .the definition of the noisy ssr process with detailed numerical simulation results for the survival time statistics is presented in sec .ii . subsequently , analytical and numerical results for the records statistics in correlated random events are studied in sec .finally , our results are summarized in sec .iv which also includes a discussion .we begin with recalling the definition of ssr stochastic process .this can be visualised as a directed random hopping process on , the set of positive integers .the state of the process is , say , , where , meaning the walker is at site after time .the boundary conditions are and , and the life span or survival time of the process is a discrete random variable such that .the dynamical rules are the following : if the walker is at site with after time , then at time it can go to any site in the interval ] with probability .the same process is repeated until the walker reaches to site 1 .clearly , the process is strictly reducing .since represents that there is only one state for the system under consideration , this would act as an absorbing state .one available state means the system s state does not change as a function of time , meaning no dynamics .further , consider an unconstrained random hopping that can be seen as random hopping on , without any restriction .denote the state of this process as with . here , the dynamics is such that walker can jump from site to any available site with uniform probability. then the noisy ssr process can be described as a superposition of ssr and unconstrained random hopping where the ssr process is executed with probability .the state of the noisy ssr process is evolved in time as x_(t ) = x(t ) + ( 1-)x_0(t ) , with ] .we first present simulations using monte carlo method .numerical results , shown in figs . [ fig1 ] and [ fig2 ] suggest that mean and variance of the survival time as a function of the system size , within statistical error , follow the simple scaling behaviour ~n^ , and _^2 ~n^2 , with .this relation will be derived below .one can also immediately note that the relative fluctuation is independent of .further , the survival time distribution for the noisy ssr process exhibits a universal scaling behaviour independent of and , when plotted with scaled argument variable of a function [ see figs .[ fig3 ] and [ fig4 ] ] .this suggests that the survival time distribution can be expressed as _ n ( ) ~n^-j ( ) , where the universal scaling function has an exponentially decaying form such as j ( ) ~(- ) , where is a constant . clearly , for large .let us introduce an indicator variable such that it is on or 1 if the noisy ssr process visits site after time and off or 0 otherwise . with this indicator variable on an average how many times the walker visits site can be easily computed as = p(k) ] has the average number of divisors that varies logarithmically as a function of . 
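A minimal Monte Carlo sketch of the noisy SSR dynamics just described. The move rule below is our reading of the garbled text: with probability λ (the weight of the SSR component) the walker jumps to a uniformly chosen lower site, otherwise it jumps to a uniformly chosen site of the full state space {1, ..., N}; the walk starts at N and stops on its first visit to the absorbing site 1. Estimating the mean and variance of the survival time for a few system sizes is how the power-law growth with N reported above can be checked numerically.

```python
import numpy as np

def survival_time(n_sites, lam, rng):
    """One realization of the noisy SSR survival time."""
    x, t = n_sites, 0
    while x > 1:
        if rng.random() < lam:
            x = rng.integers(1, x)            # SSR move: uniform on {1, ..., x-1}
        else:
            x = rng.integers(1, n_sites + 1)  # unconstrained move: uniform on {1, ..., N}
        t += 1
    return t

rng = np.random.default_rng(1)
lam = 0.5
for n_sites in (100, 1_000, 10_000):
    tau = np.array([survival_time(n_sites, lam, rng) for _ in range(2_000)])
    print(n_sites, tau.mean(), tau.var())
```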
dividing a number is basically reducing its size .but , the behaviour of variance is not analytically known and numerical results suggest that this does not vary in a logarithmic manner as in the case that of the survival time for the ssr process .the crucial point here is that in order to ensure that there is an exact correspondence [ as discussed later ] , the complete statistics should behave similarly .for this reason , it becomes important to numerically check not only both the mean and variance but the entire distribution of the survival time in the noisy ssr process in its original version .consider a discrete time symmetric random walk with a constant drift term described as x(t+1 ) = x(t ) + c + ( t ) , [ eq1 ] where denotes the drift and is a random variable with cauchy distribution f ( ) =. set initial condition and the process is executed for time such that . in order to generate , the random variables with such distribution , first generate uniformly distributed random variables in the range ] that leads to ] . since in this caseeach step may form a record once in a single realization , the visiting and occupation probabilities are directly related .the analysis [ from eq .( 6 ) to ( 9 ) ] done in the sec .ii shows that .numerical results shown in fig . [ fig6 ] support this observation .it can be observed that the statistics of survival time in the noisy ssr process is equivalent to the statistics of the total number of records formed in time series of length modelled by eq .( [ eq1 ] ) that describe correlated random variables .let us recall the indicator variable such that it is on or 1 if forms a record that is it is better than all previous entries or otherwise . with this indicator variable total number of records can be easily computed as .further , the mean records is , where . when or is positive and large , each event is likely to surpass the previous event .consequently , the visiting probability would be uniform .on the other extreme , or is negative and small . for the site, the event would form a record with probability as each event would behave independently and occurring with equal probability . the chance for a site beyond the first site to form a record rapidly decreases as its distance increases from the first site , since this follows a power law behaviour in the visiting probability with exponent approximately equal to 1 .since is the initial condition , any positive event would form a record at the first step . for iid random variables ,the first event forms a record with unit probability , but for the correlated time series modelled as symmetric drifted random walk with zero drift can have either positive or negative value with equal probability .hence the first event forms a record with 1/2 probability .further , it is noted that in a random time series a record would form at a site or time once , but in noisy ssr process the same site may be visited many times .although there are several stochastic processes studied under the field records statistics , the case that we consider is the most useful as it is simple and able to explain the entire spectrum of exponents of zipf s law observed for noisy ssr process .we have studied the survival time statistics in noisy ssr process . 
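The record-counting construction described above is easy to check numerically before the summary that follows. In the sketch below (drift values are illustrative) the walk x(t+1) = x(t) + c + ξ(t) is generated with standard-Cauchy jumps obtained from uniform variates via ξ = tan(π(u − 1/2)), and an entry counts as a record when it strictly exceeds every earlier entry, including the initial value x(0) = 0.

```python
import numpy as np

def count_records(n_steps, drift, rng):
    u = rng.random(n_steps)
    xi = np.tan(np.pi * (u - 0.5))        # standard Cauchy jumps via the inverse CDF
    x = np.cumsum(drift + xi)             # walk started from x(0) = 0
    prev_max = np.maximum.accumulate(np.concatenate(([0.0], x[:-1])))
    return int(np.sum(x > prev_max))      # record: strictly better than all previous entries

rng = np.random.default_rng(2)
for drift in (-1.0, 0.0, 1.0):
    records = [count_records(10_000, drift, rng) for _ in range(500)]
    print(drift, np.mean(records))
```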
based on numerical simulationswe find that the mean and variance both satisfy a simple scaling behaviours as with .in addition , the survival time probability distribution has the form , where the universal scaling function is exponentially decaying .these results enable us to identify a map between the survival time statistics in noisy ssr process and the records statistics in a symmetric random walk with a drift term and with the jumps following a cauchy distribution . established analytical results for records statisticsthereby provide an enhanced understanding of the noisy ssr process .in addition , the records statistics in correlated random events explain the entire spectrum of exponents of the zipf s law . as noted that the ssr process has correspondence with records statistics in iid random variables and the records statistics in this case can be mapped with statistics of cycles in random permutation with uniform measure .polymerization that consists of fragmentation and aggregation processes can be mapped to the formation of cycles in random permutation with uniform measure .it is suggestive , using transitivity , that the ssr process can apply to such situations as well .acy thanks r. ramaswamy for critical reading of the manuscript and providing useful comments . e. w. montroll and m. f. shlesinger , proc . natl . acad .u. s. a. * 79 * , 3380 ( 1982 ) ; b. j. west and m. f. shlesinger , int .b * 3 * , 795b ( 1989 ) .d. sornette , .a. amir , y. oreg , and y. imry , .
We study survival time statistics in a noisy sample space reducing (SSR) process. Our simulations suggest that both the mean and the standard deviation scale as , where is the system size and is a tunable parameter that characterizes the process. The survival time distribution has the form , where is a universal scaling function and . Analytical insight is provided by the equivalence between survival time statistics in the noisy SSR process and record statistics in a correlated time series modelled as a drifted symmetric random walk with Cauchy-distributed jumps.
in recent years , multiple - input multiple - output ( mimo ) broadcast channel ( bc ) systems , constructed by an access point with multiple antennas and many users , have been intensively studied . in a mimo bc ,multiple users are simultaneously served through independent user specific multiple data streams and a _ multiplexing gain _ is attained as in point - to - point mimo .the capacity region of the gaussian mimo bc was derived in where dirty paper coding ( dpc ) is known to be a capacity achieving scheme . because dpc is hard to implement ,many practical techniques have been proposed such as zero - forcing precoding ( channel inversion ) and tomlinson - harashima precoding . in these schemes ,multiuser interference is pre - canceled at the transmitter with perfect channel state information at the transmitter ( csit ) .csit can be obtained by reciprocity between uplink and downlink channels in time division duplexing ( tdd ) systems and feedback from receivers in frequency division duplexing ( fdd ) systems . in fdd systems , the amount of feedback information is in general limited and hence perfect csit is not available .the accuracy of csit depends on both the type of feedback technique and the amount of feedback overhead allowed .a popular feedback architecture is a codebook approach where an index of a codeword in a predetermined codebook is fed back to the transmitter .there have been many studies on the performance of codebook based multi - user mimo systems using various transmission schemes such as zero - forcing ( zf ) beamforming , block diagonalization ( bd ) , and the unitary precoding . in limited feedback environments ,a key difference between mimo bc and point - to - point mimo is the multiplexing gain achievability . in point - to - point mimo ,a full multiplexing gain is achievable even with open - loop transmission . on the other hand, a full multiplexing gain can not be achieved using a finite amount of feedback information in a mimo bc .the multiplexing gain of mimo bc rather diminishes in the high signal - to - noise ratio ( snr ) region due to imperfect orthogonalization resulting from inaccurate csit . to maintain the multiplexing gain ,it was shown in that the feedback size should linearly increase with snr ( in decibel scale ) . sincea large amount of feedback is a heavy burden on uplink capacity , many studies have been devoted to increasing the efficiency of limited feedback . in ,a feedback reduction technique has been proposed using multiple antennas at the receiver .user selection in mimo bc has been studied to reduce the amount of uplink feedback . in ,random beamforming was generalized and semi - orthogonal user selection was proposed .also , it was shown that channel quality information as well as channel direction information are necessary to obtain both the maximum multiplexing and diversity gains . in ,a dual - mode limited feedback system was proposed to switch between single user and multiuser transmissions .the authors in investigated two partial feedback schemes for user scheduling .in practical systems , the uplink capacity of control channels is typically limited and shared among multiple users . a sum feedback rate constraint in space division multiple access ( sdma )was considered in but the amount of feedback information per user was held constant . 
in ,the optimum feedback size per user and the number of feedback users were investigated under a sum feedback rate constraint assuming all users employ the same amount of feedback .recently , strategies of feedback bit partitioning between the desired and interfering channels proposed in for a cooperative multicell system . in -user multiple - input - single - output( miso ) interference channel , the feedback rate control to minimize the average interference power was proposed in . in mimo bc ,the effects of different amounts of feedback size among the users are studied in . in ,the feedback rate sharing strategy has been proposed to minimize the upper bound of sum rate loss in correlated single - polarized and dual - polarized channels , respectively .the feedback rate sharing strategies in the low and high snr regions have been proposed in terms of the correlation coefficient .the feedback rate sharing strategy to increase the sum rate was also proposed in by considering users path losses , where the system performance was shown to be improved by changing feedback bit allocation according to the path losses. however , when the path losses are similar , the feedback rate sharing strategy in is to equally share the sum feedback size regardless of snr levels but it is not optimal in some snr regions . also , the effects of path losses are canceled out in the high snr region so that equal sharing of the sum feedback size is not optimal any more . the feedback rate sharing strategy to minimize total transmission power for given users outage probabilitieswas proposed in . in this paper, we provide a new analytical framework for the feedback rate sharing strategy and rigorously analyzed the effects of different amounts of feedback information among users by extending and generalizing the results of .the effects of feedback rate sharing on the achievable rate are investigated in a mimo bc with zf beamforming at the transmitter and random vector quantization ( rvq ) at each user .we derive the optimal feedback rate sharing strategies according to various snr regions .our analytical results prove the optimal feedback rate sharing strategy in the low and the high snr regions .the feedback rate should be equally shared among all users in the low snr region while the whole feedback rate should be allocated to a single user in the high snr region . for the mid - snr region ,we establish a simple numerical method for finding the optimal feedback sharing strategy based on our analytical framework . through the proposed numerical method, we find that to equally allocate whole feedback size to a partial number of users is the optimal feedback rate sharing strategy . for the users suffering different path losses ,we show that the proposed numerical method can be applicable to finding the optimal feedback rate sharing strategy . in the high snr region , we prove that the effects of path losses are canceled out and hence the optimal feedback strategy is to allocate the whole feedback size to a single user with the highest snr . 
our proposed feedback rate sharing strategy derived from the system with zf beamforming and rvqis also evaluated for the systems with other techniques such as stream control , regularized zf transmission scheme and spherical cap codebook model .our numerical results show that our proposed feedback rate sharing strategy is still valid for other configurations .the rest of this paper is organized as follows .we describe the system model and formulate the problem in section ii .the impacts of asymmetric feedback size among users are investigated in section iii .the optimal sum feedback rate sharing strategy is derived in section iv .the numerical results are shown in section v. section vi concludes our paper .our system model is depicted in fig .we consider a mimo bc with transmit antennas and users having a single antenna .if the receiver has multiple antennas , each antenna can be considered as an independent user , or receive combining discussed in can be adopted .the received signal at the user becomes where is the path loss of the user , is a channel vector whose entries are independent and identically distributed ( i.i.d . )circularly symmetric complex gaussian random variables with zero mean and unit variance , is the transmit signal vector , is a complex gaussian noise with zero mean and unit variance , and the superscript denotes conjugate transposition of a vector . when is the transmit signal power , satisfies that =p ]is constructed with the quantized csi fed back from the users .the normalized column vector of becomes the precoding vector for the user , , where denotes the matrix inversion .thus , we can decompose as , where ] becomes where .for an arbitrary codeword , is a squared inner product of two independent random vectors isotropic in , so follows the beta distribution with parameters ( ) becomes . ] with parameters .consequently , a quantization error using -bit rvq , , becomes the minimum of independent beta distributed random variables with parameters .correspondingly the complementary cumulative density function ( cdf ) of is given by }=\left(1-z^{m-1}\right)^{2^{b_k}}. \label{eqn : qe_cdf}\end{aligned}\ ] ] we assume an _ average _ feedback size allocated for each user is so that the total feedback rate ( i.e. , the sum of all individual users feedback rates ) becomes bits per channel realization .assuming the feedback rate sharing among users , each user uses -bit feedback and the sum feedback rate constraint becomes . since codebook size is typically a non - negative integer number of bits , we restrict the average feedback size , , as an positive integer , i.e. , .for the same reason , we assume the feedback size at the user , , as a non - negative integer , i.e. , for , from individual feedback rates , a feedback rate sharing strategy can be expressed by -dimensional vector ,\end{aligned}\ ] ] and the sum feedback rate constraint becomes where is the vector one norm . from, we obtain the average sum rate as a function of transmit power , , and the sum feedback rate sharing strategy , , denoted by given by .\label{eqn : sum_rate}\end{aligned}\ ] ] thus , we solve the following problem : }{\textrm{maximize } } & \qquad \mathcal{r}(p , \mathbf{b } ) \label{eqn : optimization_problem}\\ \textrm{subject to } & \qquad \sum_{k=1}^k b_k = k\bar{b},\label{eqn : constraint1}\\ & \qquad b_k\in \{0\ } \cup \mathbb{z}^+ \quad k=1,\ldots , k. 
\label{eqn : constraint2}\end{aligned}\ ] ] note that the optimal sum feedback rate sharing strategy will be derived later and shown to be dependent on the snr value . therefore , the feedback bits are reallocated each time when the snr changes . in practical scenarios ,several allocation patterns can be constructed offline for typical snr values and then the transmitter can broadcast an appropriate allocation pattern using the current snr .to find the optimal feedback rate sharing strategy , we first analyze the impact of asymmetric feedback sizes among the users on the sum rate . for the simplicity ,we define three random variables where is the channel gain , is the squared inner product between the normalized channel vector and the beamforming vector , and is the sum of the squared inner products between the normalized channel vector and the other beamforming vectors . note that is not affected by the feedback size of the user since is selected in the null space of . using the quantization error defined in , we can decompose into where is an unit vector such that .the random variable becomes where the random variable is the sum of the square of inner products between the quantization error vector and the beamforming vectors of other users .the independency between and is shown in [ 12 ] from the fact that the magnitude of the quantization error , is independent of the direction of quantization error , .thus , we can easily find that and are independent .we start from the following lemma .[ lemma : rv_properties ] the random variables , , and have following properties .1 . invariant with the feedback sizes , , the distributions of , , and are identical for all users , respectively , i.e. , where , , and are the marginal pdfs of , , , respectively , 2 . , , and are independent of , respectively .the joint pdf of , , and are identical for all users , i.e. , where is the joint pdf of , , and .see appendix a. [ lemma : capacity_k ] the achievable rate of the user is determined by only its own feedback size and is independent of the other users feedback sizes . from lemma [ lemma : rv_properties ], we can rewrite the average sum rate in as {\nonumber\\}&= \sum_{k=1}^{k}\mathbb{e}_{q_1,x_1,w_1,z_k}\left [ \log_2\left ( 1+\frac{\frac{p}{m } q_1x_1}{1 + \frac{p}{m } q_1 w_1 z_k } \right ) \right].{\nonumber}\end{aligned}\ ] ] thus , the achievable rate at the user is dependent on only its own feedback size because , , and are not affected by the feedback size as noted in lemma [ lemma : rv_properties ] .since the distribution of is a function of , the achievable rate at each user is only affected by its own feedback size .thus , the achievable rate of the user becomes a function of transmit power and own feedback size denoted by such that , \label{eqn : capacity_k}\end{aligned}\ ] ] and it satisfies that .to verify lemma [ lemma : capacity_k ] , two feedback scenarios ] are considered in zf mimo bc with , . in fig .[ fig : capacity_user1 ] , the sum rate for the first scenario is much higher than that for the second scenario due to the larger amount of total feedback information .as predicted in lemma [ lemma : capacity_k ] , however , the achievable rate of user 1 is the same in the two scenarios .lemma [ lemma : capacity_k ] indicates that a feedback size of a user does not affect the achievable rates of the other users and only changes its own achievable rate . under a sum feedback rate constraint ,an increase of one user s feedback size necessarily decreases other users feedback sizes . 
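As a numerical sanity check of the RVQ model introduced above, the following sketch (not the paper's code) draws an M-antenna channel direction and a B-bit random codebook, computes the quantization error Z as one minus the largest squared inner product, and compares its empirical mean with the closed-form value 2^B · B(2^B, M/(M−1)) quoted later in the appendix.

```python
import math
import numpy as np

def rvq_error(m, b, rng):
    """Quantization error of one channel realization under B-bit random vector quantization."""
    h = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    h_dir = h / np.linalg.norm(h)                                   # channel direction
    cb = rng.standard_normal((2**b, m)) + 1j * rng.standard_normal((2**b, m))
    cb /= np.linalg.norm(cb, axis=1, keepdims=True)                 # random unit-norm codebook
    return 1.0 - np.max(np.abs(cb @ h_dir.conj())**2)

m, b = 4, 6
rng = np.random.default_rng(3)
z = np.array([rvq_error(m, b, rng) for _ in range(5_000)])
closed_form = 2**b * math.exp(math.lgamma(2**b) + math.lgamma(m / (m - 1))
                              - math.lgamma(2**b + m / (m - 1)))
print(z.mean(), closed_form)        # the two values should agree to within Monte Carlo error
```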
with more accurate , the transmitter can pick the beamforming vectors of other users in more accurate null space of the user .hence , the user benefits from less interference from other users . on the other hand , the other users experience more interference since the accuracy of the users channel knowledge degrades under the sum feedback rate constraint .consequently , when a user increases its own feedback size , the achievable rate of the user increases but the achievable rates of the other users decrease , and vice versa .the optimal feedback rate sharing strategy starts from this fundamental tradeoff .in the low snr region , the achievable rate of the user given in becomes { \nonumber\\}&=\lim_{p\to 0}\mathbb{e}\bigg[\log_2\left(1+\frac{p}{m } q_1 x_1\right ) + \log_2\left(1+\frac{\frac{p}{m}q_1w_1z_k}{1+\frac{p}{m}q_1x_1}\right){\nonumber\\}&\qquad- \log_2\left(1+\frac{p}{m}q_1w_1z_k\right ) \bigg ] { \nonumber\\}&\stackrel{(a)}{= } \frac{1}{\ln 2}\mathbb{e}\left[\frac{p}{m } q_1 x_1\right ] -\frac{1}{\ln 2}\mathbb{e}\left[\frac{\frac{p^2}{m^2}q_1 ^ 2x_1w_1z_k}{1+\frac{p}{m}q_1x_1}\right ] { \nonumber\\}&\stackrel{(b)}{= } \frac{1}{\ln 2}\mathbb{e}\left[\frac{p}{m } q_1 x_1\right ] -\frac{1}{\ln 2}\mathbb{e}\left[\frac{\frac{p^2}{m^2}q_1 ^ 2x_1w_1}{1+\frac{p}{m}q_1x_1}\right ] \cdot \mathbb{e}[z_k],{\nonumber}\end{aligned}\ ] ] where the equality holds because , and the equality holds from the fact that is independent of , , and from lemma [ lemma : rv_properties ] . in the low snr region , therefore , the optimization problem is equivalent with the following problem : }{\textrm{minimize } } & \qquad \sum_{k=1}^k \mathbb{e}[z_k ] \label{eqn : optimization_problem_l}\\ \textrm{subject to } & \qquad \eqref{eqn : constraint1},\eqref{eqn : constraint2}. { \nonumber}\end{aligned}\ ] ] for a vector , we denote by the vector with the same components , but sorted in decreasing order . for given vectors such that , we say majorizes written as when & \ge \sum_{i=1}^n [ \mathbf{a}_2^\downarrow]_i \qquad 1\le n \le m,\end{aligned}\ ] ] where ] , the range of becomes ] only .for the quantization error ] .however , note that when although ] , ] , ] because the achievable rate for other strategies can be easily obtained from lemma [ lemma : capacity_k ] . denoting the set of all possible strategies by , the procedure to find the optimal feedback strategy is described in algorithm [ alg : frs_procedure ] .the complexity of the procedure will be analyzed in section [ section : complexities ] .[ alg : frs_procedure ] [ observation : mid_snr ] the optimal feedback rate sharing strategy is to allocate the same amount of feedback to the optimal number of users at given snr ..the optimal feedback rate sharing strategy for a mimo bc [ cols="^,^,^,^,^,^ " , ] [ tab : strategy_number ] for asymmetric path loss cases , without loss of generality we consider the case that . because the larger feedback size yields the higher multiplexing gain , larger feedback sizeshould be assigned to the user with smaller path loss ( i.e. , larger ) .this implicates that the strategy outperforms , i.e. , ) \ge \sum_{k=1}^k \mathcal{r}_k(\gamma_kp , [ \mathbf{b}]_k).{\nonumber}\end{aligned}\ ] ] therefore , the optimal feedback rate sharing strategy is selected in the feedback strategy set defined in .because the number of all possible strategies , i.e. , , is the same for the symmetric and the asymmetric path loss cases , the computational complexity is also the same for both cases . 
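Since only ordered allocations matter, the candidate set searched by the procedure above can be enumerated explicitly. The helper below (our reading of the enumeration step; the exact candidate set is not legible in the extracted text) lists the non-increasing integer allocations [b_1 ≥ ... ≥ b_K] that sum to the total budget K·B̄; the optimal strategy is then found by evaluating the average sum rate of each candidate, e.g. by Monte Carlo, and keeping the best one.

```python
def allocations(total, users, cap=None):
    """Yield non-increasing integer allocations of `total` feedback bits over `users` users."""
    if cap is None:
        cap = total
    if users == 1:
        if total <= cap:
            yield (total,)
        return
    for first in range(min(total, cap), -1, -1):
        for rest in allocations(total - first, users - 1, cap=first):
            yield (first,) + rest

k, b_bar = 3, 4                       # three users sharing an average of 4 bits each
candidates = list(allocations(k * b_bar, k))
print(len(candidates))                # number of candidate strategies to evaluate
print(candidates[:4])                 # (12, 0, 0), (11, 1, 0), (10, 2, 0), (10, 1, 1)
```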
although the equal power allocation with full multiplexing is mainly considered in our manuscript , our feedback rate sharing strategy can readily be extended to the stream control where the transmitter adaptively controls multiplexing gain . for mimo bc , for example , four ways of equal power allocation according to the number of streams ] , ] are possible with the steam control .note that single stream transmission corresponds to the tdma scheme .since we consider zf beamforming at the transmitter , the beamforming vector for each user is randomly picked orthogonal to other users quantized channels .therefore , it can easily be shown that theorem [ theorem : strategy_low ] and theorem [ theorem : strategy_high ] are still valid even with the stream control . in table[ tab : strategy ] , we have found the optimal feedback rate sharing strategy for mimo bc according to the number of streams and snr when total feedback budget is 24bits and the path losses are symmetric . we can also find the optimal feedback rate sharing strategies for asymmetric path losses because lemma [ lemma : capacity_k ] still holds for the stream control and hence the rate of each served user is affected by its own feedback size .in this section , we present numerical results to analyze the effects of feedback rate sharing strategies . in fig .[ fig : sum_rate ] , the average sum rates of a mimo bc using different feedback rate sharing strategies .we consider five feedback rate sharing strategies , [ 2,14 ] , [ 4,12 ] , [ 6,10 ] , [ 8,8] ] achieves the highest average sum rate while allocating the whole feedback rate to a single user ] achieves the highest achievable rate whereas equal sharing of the feedback rate ] , ] , ] ) . in fig .[ fig : sum_rate_tdma ] , we can observe that zf beamforming is inferior to a tdma system in both low and high snr regions although it outperforms a tdma system in the mid snr region . in these regions , it is desirable to adopt the mode switching between zf and tdma for sum rate maximization . in the regularized zf beamforming ,the normalized column vectors of are used for the beamforming vectors where is an identity matrix .although the optimal feedback rate sharing strategy using the regularized zf beamforming is hard to analyze , the feedback rate sharing strategy will be the same with that of zf beamforming case in the high snr region .this is because the regularized zf beamforming vectors correspond to zf beamforming vectors in the high snr region . in fig .[ fig : rvq_tdma_mmse_60bits ] , the average sum rates of a mimo bc using regularized zf beamforming are plotted while other parameters are same in fig .[ fig : sum_rate_tdma ] . 
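For reference, here is a compact sketch of the two precoders being compared. Both build unit-norm beamforming vectors from the quantized channel directions; the regularized variant adds a diagonal loading term, taken here as M/P following the usual MMSE choice (this constant is our assumption, since it is not legible in the extracted formula).

```python
import numpy as np

def precoders(h_hat, p, regularized=False):
    """Unit-norm ZF or regularized-ZF beamforming vectors from the quantized channel rows."""
    k, m = h_hat.shape
    alpha = (m / p) if regularized else 0.0
    w = h_hat.conj().T @ np.linalg.inv(h_hat @ h_hat.conj().T + alpha * np.eye(k))
    return w / np.linalg.norm(w, axis=0, keepdims=True)

rng = np.random.default_rng(4)
h_hat = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
h_hat /= np.linalg.norm(h_hat, axis=1, keepdims=True)            # quantized channel directions

print(np.round(np.abs(h_hat @ precoders(h_hat, p=10.0)), 3))                    # ZF: diagonal
print(np.round(np.abs(h_hat @ precoders(h_hat, p=10.0, regularized=True)), 3))  # RZF: small leakage
```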
as shown in fig .[ fig : rvq_tdma_mmse_60bits ] , the regularized zf beamforming improves zf beamforming especially in the low snr region and hence outperforms tdma in wider snr region .since tdma always achieves a multiplexing gain of one even with blind transmission , tdma system outperforms mimo bc with limited feedback in the high snr region .this is because the achievable rate of mimo bc with finite limited feedback is saturated in the high snr region due to mutual interference .the inferior performance in the high snr region is a fundamental limit of mimo bc with limited feedback .however , it should be noted that zf beamforming can be enhanced by the regularized zf beamforming and our feedback rate sharing strategy enables zf beamforming or regularized zf beamforming to outperform tdma in wider snr region .note that our main contributions are to find the feedback rate sharing strategy and to show the feedback rate sharing strategy ( e.g. , ) enhances the system performance compare to equal feedback rate sharing ( e.g. ] for all .this can be explained from the fact that where and are from the second property and the third property , respectively .[ appendix : strategy_low ] to prove theorem [ theorem : strategy_low ] , we firstly show the average quantization error ] is a discretely convex function of . it was shown in that = 2^{b } \cdot \beta \left(2^{b } , \frac{m}{m-1}\right) ] where .when we define a forward difference function - \mathbb{e}[z_k\vert b_k = b] ] and minimized and maximized when and , respectively .since a discretely convex function has an increasing ( non - decreasing ) forward difference function , ] as stated in . from lemma [ lemma :ez_convexity ] , we know the average quantization error is a convex function of . with the feedback rate sharing strategies ,therefore , we can conclude that \ } \le \sum_{k=1}^k \mathbb{e}\{z_k\vert b_k=[\mathbf{b}_2]_k\},\end{aligned}\ ] ] and equivalently , .[ appendix : strategy_high ] we firstly show that ] . in this case , the forward difference function - \mathbb{e}\left[\log_2 z_k \vert b_k = b \right] ] is a discretely concave function of .in majorization theory , for a concave function , it satisfies that ) \ge \sum_{i=1}^n g([\mathbf{a}_2]_i)\end{aligned}\ ] ] whenever two vectors satisfies . in the high snr region , the average sum rate with feedback rate sharing strategyis related with ] is the concave function of .thus , under the feedback rate sharing strategies , we can conclude that \ } \ge \sum_{k=1}^k \mathbb{e}\{\log_2 z_k\vert b_k=[\mathbf{b}_2]_k\},{\nonumber}\end{aligned}\ ] ] equivalently , . as stated in section [ section :high_snr_region ] , in the high snr region , the achievable rate at each user is dominated by the rate decreasing term .thus , we conclude that the feedback rate sharing strategy for feedback rate sharing strategies .h. weingarten , y. steinberg , and s. shamai ( shitz ) , `` the capacity region of the gaussian multiple - input multiple - output broadcast channel , '' _ ieee trans .inf . theory _52 , no . 9 , pp .39363964 , sep .2006 .i. h. kim , s. y. park , d. j. love , and s. j. kim , `` improved multiuser mimo unitary precoding using partial channel state information and insights from the riemannian manifold , '' _ ieee trans .wireless commun ._ , vol . 8 , no . 8 , pp .40144023 , aug .2009 .y. huang and b. d. rao , `` an analytical framework for heterogeneous partial feedback design in heterogeneous multicell ofdma networks , '' _ ieee trans sig .3 , pp . 753769 , feb .2013 . c. k. 
au - yeung and d. j. love , `` on the performance of random vector quantization limited feedback beamforming in a miso system , '' _ ieee trans .wireless commun ._ , vol .6 , no . 2 ,458462 , feb .2007 .j. zhang , r. w. heath jr ., m. kountouris and j. g. andrews , mode switching for the multi - antenna broadcast channel based on delay and channel quantization , " _eurasip j. adv . signal process .( special issue multiuser lim .feedback ) _, 2009 , article i d 802548 , 15 pages .
in this paper , we consider a multiple - input multiple - output broadcast channel with limited feedback where all users share the feedback rates . firstly , we find the optimal feedback rate sharing strategy using zero - forcing transmission scheme at the transmitter and random vector quantization at each user . we mathematically prove that equal sharing of sum feedback size among all users is the optimal strategy in the low signal - to - noise ratio ( snr ) region , while allocating whole feedback size to a single user is the optimal strategy in the high snr region . for the mid - snr region , we propose a simple numerical method to find the optimal feedback rate sharing strategy based on our analysis and show that the equal allocation of sum feedback rate to a partial number of users is the optimal strategy . it is also shown that the proposed simple numerical method can be applicable to finding the optimal feedback rate sharing strategy when different path losses of the users are taken into account . we show that our proposed feedback rate sharing scheme can be extended to the system with stream control and is still useful for the systems with other techniques such as regularized zero - forcing and spherical cap codebook . multiple - input multiple - output ( mimo ) broadcast channel , limited feedback , random vector quantization , feedback rate sharing
spatial multiplexing for the multiple - input multiple - output ( mimo ) systems , employing multiple transmit and receive antennas , has been recognized as an effective way to improve the spectral efficiency of the wireless link .more recently , the multiuser schemes have been investigated for the spatial multiplexing mimo systems .this paper focuses on the downlink multiuser schemes in which each user can not cooperate with the others thus suffers from the interference from them .mainly , there are two kinds of multiuser schemes .one is the precoder or the transmit beamforming , such as the dirty - paper coding ( dpc ) and the zero - forcing ( zf ) , etc . , which mitigates the multiuser interference only by processing at the transmitter .the other is the joint transmitter - receiver ( tx - rx ) design , such as the nullspace - directed svd ( nu - svd ) and the minimum total mean squared error ( tmmse ) , etc . in general , the former possesses lower complexity but more performance penalty . with the great development of signal processors , thelatter gradually draws more attention . for the joint tx - rx design , the schemes proposed in minimize mean squared error ( mmse ) , or maximize the capacity under the transmit power constraint . whereas on some occasions , such as the multimedia communication, it is required to minimize the total transmit power while guarantee the quality of service ( qos ) . investigate the beamforming and the power allocation policy when all users are subjected to a set of post - processing signal - to - interference - and - noise ratio ( post - sinr ) constrains in the uplink simo and the downlink miso . extend this work to the downlink mimo and the mimo network , however the mimo systems discussed in are assumed that there is only one substream between each pair of the transmitter and receiver .in other words , only the multiuser interference appears in the so - called diversity mimo system in . for the multiuser spatial multiplexing mimo system ,however , both the multiuser interference between individual users and self - interference between individual substreams of a user should be mitigated .for the downlink , the transmit beamforming affects the interference signature of all receivers , whereas the receive beamforming only affects that of the corresponding user . construct a dual system , called the virtual uplink , and indicate that the virtual uplink can obtain the same post - sinr as the primary downlink .moreover , the receive beamforming matrix of the virtual uplink is identical with the transmit beamforming matrix of the primary downlink .the design of the downlink , therefore , can resort to the virtual uplink . in this paper, we extend the duality derived for mimo network in to the multiuser spatial multiplexing mimo system . according to the uplink - downlink duality, we propose a joint tx - rx scheme to minimize the weighted sum power under the post - sinr constraints of all the subchannels ._ notation _ : boldface upper - case letters denote matrices , and boldface lower - case letters denote column vectors . , , , and denote trace , conjugate , conjugate transposition , euclidian norm and frobenius norm , respectively . 
denotes a diagonal matrix with diagonal elements drawn from the vector .{i , j} ] denote the ,-th element and -th column of a matrix , respectively .we consider a base station ( bs ) with antennas and mobile stations ( ms s ) each having antennas .there are substreams between bs and ms , that is to say , bs transmits symbols to ms simultaneously .the signal recovered by ms can be written as where is the recovered signal vector . is the transmitted signal vector from bs to ms with zero - mean and normalized covariance matrix . denotes the power vector allocated to ms .a linear post - filter is used to recover an estimation of the transmitted signal vector .the mimo channel from bs to ms is denoted as , and assumed flat faded .hence , its elements are the complex channel gains , and they are independently identically distributed ( i.i.d . ) zero - mean complex gaussian random variables with the unity variance .moreover , the perfect channel state information are assumed available at both transmitter and receiver via some way , for example , channel measurement at receiver and fast feedback to the transmitter for the frequency division duplex ( fdd ) systems , or invoking the channel reciprocity in time division duplex ( tdd ) systems . is used to weight and transform it into a vector . is the noise vector with the correlation matrix . for simplicity , in the sequelwe assume .we design the , and in ( 1 ) to minimize the weighted sum power under the post - sinr constraints , which can be denoted as the following optimization problem . where ^t ] , ] , ( 1 ) can be rewritten into {\bf{x}}_k\\ \!\!\!\!\!\!{}&+{\bf{a}}_k^h{\bf{h}}_k\sum\limits_{i=1,i{\ne}k}^k{{\bf{b}}_{i}diag(\sqrt{{\bf{p}}_i}){\bf{x}}_i}+{\bf{a}}_k^h{\bf{n}}_k \end{aligned}\ ] ] the diagonal elements of the first part in the right - hand side ( rhs ) of ( 3 ) denote the useful signals , and the non - diagonal elements denote the self - interference .the medial and the last parts in the rhs of ( 3 ) denote the multiuser interference and the noise , respectively .moreover , the post - sinr of the ms s -th substream can be denote as if ] , the link power gain between and can be denoted as {m , n } = ||{\bf{a}}_{k , j } ^h { \bf{h}}_k { \bf{b}}_{m , n } ||_2 ^ 2\ ] ] then ( 4 ) can be rewritten into {k , j } } } { { \sum\limits_{i = 1,i \ne j}^l { p_{k , i } [ { \bf{\phi } } _ { k , j } ] _ { k , i } } + \sum\limits_{m = 1,m \ne k}^k { \sum\limits_{n = 1}^l { p_{m , n } [ { \bf{\phi } } _ { k , j } ] _ { m , n } } + \sigma _n ^2 ||{\bf{a}}_{k , j}||_2 ^ 2 } } } \\ \end{array}\ ] ] by substituting ( 6 ) into the constraint inequality of ( 2 ) , we obtain where the -th element of is =\left\ { \begin{array}{cc } -\frac{{[{\bf{\phi}}_{k , j}]_{k , j}}}{{\gamma_{k , j } } } & m=(k-1)l+j\\ { [ { \bf{\phi}}_{k , j}]}_{\left\lceil{\frac{m}{l}}\right\rceil , m-(\left\lceil{\frac{m}{l}}\right\rceil-1)l}&m\ne(k-1)l+j\\ \end{array}\right.\ ] ] where rounds to the nearest integer greater than or equal to .write ( 7 ) into the matrix form , we obtain where and are ^t \\{ \bf{d}}&=&\sigma_n^2\left[{||{\bf{a}}_{1,1}||_2 ^ 2}, .. ,{||{\bf{a}}_{1,l}||_2 ^ 2}, .. ,{||{\bf{a}}_{k,1}||_2^ 2}, .. 
,{||{\bf{a}}_{k , l}||_2 ^ 2}\right]^t\\ \end{array}\ ] ] so , ( 2 ) is equivalent to the following optimization problem subsequently , to obtain the lagrangian duality of ( 11 ) , we divide the solving process of ( 11 ) into two steps similar with .first , assuming and are fixed , the lagrangian function of ( 11 ) is where , are the lagrangian multipliers associated with the inequality constraints. then the lagrangian duality of ( 11 ) is according to the slater s condition , ( 11 ) is equivalent to ( 13 ) . since the gradient of the lagrangian function ( 12 ) with respect to vanishs at optimal points , we obtain . substituting it into ( 12 ) , we obtain . moreover , as and , ( 13 ) can be rewritten to similar with ( 6)-(9 ) , substitute ( 10 ) into ( 14 ) , we obtain where {(k-1)l+j}{\bf{i } } \\ \end{aligned}\ ] ] where ] , where is the weight corresponding to the two substreams of ms and . , , the transmit power of ms and total power versus the weight .,width=288 ] fig .[ fig 2],[fig 3 ] plot the curves of the total transmit power versus the post - sinr goal , when . in fig .[ fig 2 ] , , the system configuration satisfies , the multiuser interference , thus , can be effectively suppressed through the beamforming . under the circumstance , increasing the transmit power of any user has nearly no effect to the post - sinr of other users .therefore , all the substreams can attain relatively high post - sinr . in fig .[ fig 3 ] , , does not hold any more . consequently, the multiuser interference can not be effectively mitigated , which means any enhancement in the transmit power of any user is very likely to deteriorate the post - sinr of other users .as shown in fig .[ fig 3 ] , with the user number increasing , the available post - sinr of each user is decreased . when , only db post - sinr can be attained . in these two figures ,the total transmit power increases with the number of users and the post - sinr goal .especially when and db in fig .[ fig 3 ] , due to the residual multiuser interference , the slopes of the curves are much steeper than that in fig .[ fig 2 ] where the multiuser interference is negligible .and the steeper the curves are , the more power would be paid for the unit increase of the post - sinr of each user .[ fig 4 ] shows the curves of the transmit power of ms and the total transmit power versus the weight , when and .the left vertical axis is corresponding to the transmit power of ms and the right one is to the total transmit power . obviously , as the is increasing , the transmit power of ms is decreasing while the total power is increasing , because the optimization object is to minimize the weighted sum power .moreover , when changing from to , the transmit power of ms decreases almost , however the total power increases only about db , which demonstrates that the proposed algorithm adapts the power allocation policy very effectively with negligible penalty on performance .in this paper , we investigate the joint tx - rx design for the downlink multiuser spatial multiplexing mimo system .we show , first , the uplink - downlink duality has the following characteristics : 1 ) in both of the primal downlink and the virtual uplink , the substreams can attain the same post - sinr goal ; 2 ) the beamforming matrices are common in both of the primal downlink and the virtual uplink . 
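Before the concluding remarks, here is a compact numerical illustration of the power-minimization step for fixed beamformers (the formulation is ours and may differ from the paper's matrix notation in sign conventions): with link gains Φ[m, n] = |a_m^H H b_n|² between substreams, post-SINR targets γ_m and noise terms d_m = σ²‖a_m‖², the minimum powers that meet all constraints with equality solve a linear system, and the targets are feasible exactly when that solution is non-negative.

```python
import numpy as np

def min_powers(phi, gamma, d):
    """Minimum substream powers meeting the post-SINR targets with equality (fixed beamformers)."""
    n = phi.shape[0]
    a = -phi.copy()
    a[np.arange(n), np.arange(n)] = np.diag(phi) / gamma   # p_m*phi_mm/gamma_m - sum_{n!=m} p_n*phi_mn = d_m
    p = np.linalg.solve(a, d)
    if np.any(p < 0):
        raise ValueError("post-SINR targets infeasible for these beamformers")
    return p

rng = np.random.default_rng(6)
phi = 0.2 * rng.random((4, 4)) + 4.0 * np.eye(4)    # strong direct gains, weak cross gains
gamma = np.full(4, 2.0)                             # ~3 dB post-SINR target per substream
d = np.ones(4)                                      # unit noise terms
print(min_powers(phi, gamma, d))
```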
based on the duality ,a joint tx - rx beamforming scheme is proposed .simulation results demonstrate that the scheme can not only satisfy the post - sinr constraints which guarantee the performance of the communication links , but also easily adjust the power distribution among users by changing the weights correspondingly , which can be used to diminish the power of the edge users in a cell to alleviate the adjacent cell interference. 1 i. telatar , `` capacity of multi - antenna gaussian channels '' , _ eur .trans . telecommun _ , vol .10 , no . 6 ,pp.585 - 595 , nov./dec .q. caire and s. shamai , `` on the achievable throughput of a multiantenna gaussian broadcast channel '' , _ ieee trans .inform . theory _49 , no . 7 , pp.1691 - 1706 ,july 2003 .q. spencer , a. swindlehurst and m. haardt , `` zero - forcing methods for downlink spatial multiplexing in multiuser mimo channels '' , _ ieee trans .signal processing _ , vol .2 , pp.461- 471 , feb .2004 . z.g .pan , k.k .wong and t.s .ng , `` generalized multiuser orthogonal space division multiplexing '' , _ ieee trans .wireless commun .3 , no . 6 , pp.1969- 1973 , nov .j. zhang , y. wu , s. zhou and j. wang , `` joint linear transmitter and receiver design for the downlink of multiuser mimo systems '' , _ ieee commun .lett _ , vol.9 , pp.991 - 993 , nov .f. rashid - farrokhi , l. tassiulas , and k.j liu , `` joint optimal power control and beamforming in wireless networks using antenna array '' , _ ieee trans .11 , pp.1313 - 1324 , nov . 1998 .f. rashid - farrokhi f. , k.j .liu and l. tassiulas , `` transmit beamforming and power control for cellular wireless systems '' , _ ieee j. sel .areas commun .16 , no . 8 , pp.1437 - 1450 , octchang , l. tassiulas and f. rashid - farrokhi , `` joint transmitter receiver diversity for efficient space division multiaccess '' , _ ieee trans .wireless commun ._ , vol . 1 , no. 1 , pp.16 - 27 , jan . 2002 .s. boyd and l. vandenberghe , convex optimization , _ cambridge : u.k ._ cambridge university press , 2004 . b. song , r.l .cruz and b.d .rao , `` network duality for multiuser mimo beamforming networks and applications '' , _ ieee trans ._ , vol.55 , no.3 , pp.618 - 629 , mar .a.m. khachan , a.j .tenenbaum and r.s .adve , `` linear processing for the downlink in multiuser mimo systems with multiple data streams '' , _ieee icc06 _ , vol .9 , pp.4113 - 4118 , june 2006 .
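as an illustration of the post - sinr expression ( 6 ) and of the first step of the two - step procedure above ( solving the powers for fixed beamformers ), here is a minimal numpy sketch reconstructed from ( 5)-(7 ); all names ( H, A, B, p, gamma, sigma2 ) are our own stand - ins for the quantities of the system model, and the post - sinr targets are assumed feasible so that the linear system has a positive solution.

```python
import numpy as np

def link_gains(H_k, a_kj, B):
    """phi[m, n] = |a_{k,j}^H H_k b_{m,n}|^2, the link power gains of (5)."""
    return np.array([[abs(a_kj.conj() @ H_k @ Bm[:, n]) ** 2
                      for n in range(Bm.shape[1])] for Bm in B])

def post_sinr(H, A, B, p, sigma2):
    """post-sinr of every substream, as in (6), for given beamformers and powers."""
    K, L = p.shape
    sinr = np.zeros((K, L))
    for k in range(K):
        for j in range(L):
            a = A[k][:, j]
            phi = link_gains(H[k], a, B)
            signal = p[k, j] * phi[k, j]
            interference = (p * phi).sum() - signal   # self- plus multiuser interference
            sinr[k, j] = signal / (interference + sigma2 * np.linalg.norm(a) ** 2)
    return sinr

def min_power_fixed_beamformers(H, A, B, gamma, sigma2):
    """powers meeting the targets gamma[k, j] with equality for fixed A, B
    (substreams flattened in (k, j) order; targets assumed feasible)."""
    K, L = gamma.shape
    M = np.zeros((K * L, K * L))
    d = np.zeros(K * L)
    for k in range(K):
        for j in range(L):
            row = k * L + j
            a = A[k][:, j]
            phi = link_gains(H[k], a, B)
            d[row] = sigma2 * np.linalg.norm(a) ** 2
            M[row, :] = -phi.ravel()
            M[row, row] = phi[k, j] / gamma[k, j]
    return np.linalg.solve(M, d).reshape(K, L)
```

an iterative joint design in the spirit of the paper would alternate such a power solve with beamformer updates; the sketch is only the building block, not the proposed algorithm itself.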
in the multiuser spatial multiplexing multiple - input multiple - output ( mimo ) system , the joint transmitter - receiver ( tx - rx ) design is investigated to minimize the weighted sum power under the post - processing signal - to - interference - and - noise ratio ( post - sinr ) constraints for all subchannels . first , we show that the uplink - downlink duality is equivalent to the lagrangian duality of the optimization problems . then , an iterative algorithm for the joint tx - rx design is proposed based on this result . simulation results show that the algorithm can not only satisfy the post - sinr constraints , but also easily adjust the power distribution among the users by changing the weights accordingly , so that the transmit power to the edge users in a cell can be decreased effectively to alleviate the adjacent cell interference without performance penalty . keywords : spatial multiplexing , mimo , power allocation , lagrangian duality .
the directed last passage percolation ( dlpp ) problem can be formulated as follows : let be nonnegative independent random variables defined on the lattice , and define the last passage time from to by where denotes the set of up / right paths from to in . of interest are the asymptotics of as , and their first order fluctuations .dlpp is an example of a stochastic growth model , and has many applications in mathematical and scientific contexts .for example , dlpp is equivalent to zero - temperature directed polymer growth in a random environment an important model in statistical mechanics .the model describes a hydrophilic polymer chain wafting in a water solution containing randomly placed hydrophobic molecules ( impurities ) that repel the individual monomers in the polymer chain .due to thermal fluctuations and the random positions of impurities , the shape of the polymer chain is best understood as a random object .the statistical mechanical model for a directed polymer assumes that the shape of the polymer can be described by a directed path , thus suppressing entanglement and u - turns .the presence , or strength , of an impurity at site is described by a random variable , and the energy of a path is given by where is the inverse temperature .the typical shape of a polymer is one that minimizes .of interest is the quenched polymer distribution on paths defined by where and the normalization factor is called the _ partition function _ , and is given by in the zero - temperature limit , i.e. , , the quenched polymer distribution concentrates around paths maximizing , and we formally have directed polymers are related to several other stochastic models for growing surfaces , such as directed invasion percolation , ballistic deposition , polynuclear growth , and low temperature ising models .dlpp with independent and identically distributed ( _ i.i.d ._ ) exponential weights is equivalent to the totally asymmetric simple exclusion process ( tasep ) , which is an important stochastic interacting particle system , and to randomly growing young diagrams . briefly ,the dynamics of tasep involve a particle configuration on the lattice , evolving in time , with the dynamical rule that a particle jumps to the right after an exponential waiting time if the right neighboring site is empty .the correspondence between dlpp and tasep proceeds via the following stochastic corner growth model : partition into squares defined by the edges of the lattice .imagine that at time , all the squares in are colored white , while the remaining squares are colored black . 
for each , assign a passage time random variable to the square with on the northeast corner .the dynamic rule governing the growth process is the following : a white square at location is colored black exactly time units after both its south and west neighbors become black .the time until square is colored black is exactly last passage time from to the set of all black squares is a randomly growing young diagram .there is a one - to - one correspondence between tasep configurations , and configurations of black and white squares in the corner growth model .the idea is that when a white square is colored black , it corresponds to a particle jumping from a site to its necessarily vacant neighbor .the explicit correspondence is as follows : for every edge separating a white and black square , assign a value of 1 to vertical edges , and a value of 0 to horizontal edges .the tasep configuration corresponds exactly to reading these binary values sequentially from to .we give this correspondence more rigorously in section [ sec : formal - eq ] ( see figure [ fig : tasep ] ) .there are further applications of dlpp in queueing theory , and the model is also related to greedy lattice animals .one quantity of interest in dlpp is the time constant , , given by where .the exact form of is known for _ i.i.d ._ geometric weights , and _ i.i.d . _exponential weights , and is given by where and are the mean and variance , respectively , of the either geometric or exponential weights .for more general distributions , martin showed that is continuous on and gave the following asymptotics at the boundary : in similar fashion to the longest increasing subsequence problem , the fluctuations of for geometric and exponential weights are non - gaussian , and instead follow the tracy - widom distribution asymptotically .it is an open problem to determine and the fluctuations of for weights other than geometric and exponential .we study the dlpp problem with independent weights that are either geometric or exponential , but not identically distributed . for exponential dlpp, we assume that is exponentially distributed with mean where , and we consider the aymptotics as .the setup is identical for geometric dlpp , except that the macroscopic inhomogeneity is in the parameter of the geometric distribution . for directed polymers , this models a macroscopic ( non - random ) inhomogeneity in the strength of impurities ;while for tasep , it corresponds to an inhomogeneity in the rate at which particles move to the right .our main result , presented in section [ sec : results ] , is a hamilton - jacobi equation for the continuum limit of this dlpp problem . in the exponential case with continuous , rolla andteixeira showed that has a variational interpretation .their result is in many ways analogous to the variational problem for the longest chain problem that we exploited in our previous work .macroscopic inhomogeneities have also been considered for tasep , and for other similar growth models .in particular , georgiou et al . proved a hydrodynamic limit for tasep with a spatially ( but not temporally ) inhomogeneous jump rate , which may admit discontinuities .their result gives the limiting density profile in terms of a variational problem , and they connected this to a conservation law in the special case that the rate is piecewise constant with one jump , i.e. 
, in the context of exponential dlpp , this would be equivalent to assuming that the macroscopic mean is given by for and otherwise .our main result , theorem [ thm : main ] , gives a hamilton - jacobi equation for the limiting time constant in dlpp when the macroscopic inhomogeneity is piecewise lipschitz . in the context of tasep , this allows for a discontinuous inhomogeneous jump rate which has a spatial _ and _ temporal dependence .let us mention the conventions used in this paper .we say is geometrically distributed with parameter if for and , so that we have we say that is exponentially distributed with mean if for we have and when we have with probability one . here we have in order to ensure that our results are applicable to both exponential and geometric dlpp , we parameterize these distributions instead by their mean . for the exponential distributionthere is no change ; we have . for the geometric distribution, we have by that a geometric random variable with mean has parameter for both cases , the variance is of course a function of the mean ; in the exponential case we have , and in the geometric case we have .let us now present our main result .we consider the following two - sided dlpp model , similar to .let be independent nonnegative random variables defined on the lattice , where .let denote the last passage time from to , where and .this is defined as follows : where denotes the set of up / right paths from to in .the macroscopic inhomogeneity is described by functions and , where .specifically , given a parameter we make the following assumption : the term corresponds to the macroscopic mean within the bulk , and the term corresponds to an additional source active only on the boundary .we also assume the weights are either all geometrically distributed , or all expontially distributed .we can construct the random variables on a common probability space as follows .let be _ i.i.d ._ exponential random variables with mean , where . in the exponential case, we can simply set this setup is similar to . in the geometric case , we note that if is an exponential random variable with mean , then for any , is geometrically distributed with parameter . in order to obtain , we need that which gives that .if , then we set .hence , let us set for and when .we make a similar definition for . setting see that are independent geometric random variables satisfying . before stating the , somewhat technical , hypotheses on and , we need to introduce some notation .we say a curve in is continuous and strictly increasing if it can be parameterized in the form where is continuous and strictly increasing , and is an interval in .we make a similar definition for strictly decreasing .notice that a continuous strictly increasing ( resp .decreasing ) curve can also be parameterized in the form where is continuous and strictly increasing ( resp .decreasing ) .for simplicity , we will also use to denote the locus of points that lie on the curve . let be a continuous strictly decreasing curve in ^ 2 ] with .furthermore , depends only on and .let .we will prove the result for ; the case of is very similar .for simplicity of notation , let us set .notice that we can reduce the proof to the case where ^ 2 ] and set .then we have and and .thus let us assume that .let and let such that , , and .define without loss of generality , we may assume that .define .\ ] ] the proof is split into two steps now .we claim that . 
to see this : first note that and .it follows that where the second line follows from hlder s inequality .we claim now that . to see this : suppose to the contrary that , which implies that . by the definition of we must have and for .this contradicts our assumption that .hence .now we have since on ] .since is continuous and strictly increasing , we can parameterize the portion of that intersects ^ 2 ] .similarly we can parameterize as ,\ ] ] where \to[0,1] ] with . as in theorem[ thm : reg ] , depends only on and .the proof follows from theorem [ thm : reg ] by symmetry .[ rem : reg ] notice in theorem [ thm : reg ] that if then we have the estimate for all ^ 2 ] for every and its hlder seminorm depends only on , , , and .the same remark holds for corollary [ cor : reg ] and .we now plan to use theorem [ thm : reg ] to prove a similar regularity result for .to do this , we relate and via the following dynamic programming principle : [ prop : dpp ] suppose that satisfies ( f1 * ) for , satisfies ( f2 ) , and suppose that is bounded and borel - measurable .then for any we have notice that the boundary source is absent in the term in .this allows us to concentrate much of our analysis on , which involves only the macroscopic inhomogeneities in the bulk , and then extend our results to hold for via the dynamic programming principle .we first note that the maximum in is indeed attained , due to the continuity of restricted to and corollary [ cor : reg ] . if , then in light of , and the fact that , the maximum in is attained at and the validity of is trivial .suppose now that and let denote the right hand side in , and set .we first show that .let and such that , and .let \ , : \gamma(t ) \in \partial { \mathbb{r}}^2_+\big\}.\ ] ] then we have .set and for ] such that and .let with , such that .we can stitch together and as follows then we have where we used the fact that for all , hence . sending we have continuing with the regularity result for , let us introduce a bit of notation .for , let ] .for , is given explicitly by [ cor : ureg ] suppose that satisfies ( f1 * ) for , satisfies ( f2 ) , and suppose that is bounded and borel - measurable .then for every there exists a modulus of continuity , and a constant such that for all ^ 2 ] and set and .as in theorem [ thm : reg ] we may assume that .by proposition [ prop : dpp ] , there exists with such that set .then since and , we have by proposition [ prop : dpp ] that by subtracting from and recalling we have the proof is completed by applying theorem [ thm : reg ] and corollary [ cor : reg ] and noting that + . of course ,remark [ rem : reg ] holds with obvious modifications for and .[ rem : discont ] the hypothesis that the curves are continuous and strictly increasing can not in general be weakened to continuous and non - decreasing .for example , consider the case where on \times[0,1] ] .then we have ,\\ x_1 + x_2 - 0.5 +2\sqrt{(x_1 - 0.5)x_2},&\text{if } x \in [ 0.5,1]\times[0,1 ] , \end{cases}\ ] ] which has a discontinuity along the vertical line , which would correspond to one of the curves on which is discontinuous . in this sectionwe show in theorem [ thm : hjb ] that is a viscosity solution of ( p ) .in fact , ( p ) is the hamilton - jacobi - bellman equation for the simple optimal control problem defined by . 
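as a quick symbolic check of the kind of hamiltonian suggested by the homogeneous time constant mu ( x_1 + x_2 ) + 2 sigma ( x_1 x_2 )^{1/2} ( a sketch under that assumption, not a restatement of ( p ) ): this candidate satisfies ( u_{x_1} - mu ) ( u_{x_2} - mu ) = sigma^2 , which is also the form assumed in the numerical sketch given later.

```python
import sympy as sp

x1, x2, mu, sigma = sp.symbols('x1 x2 mu sigma', positive=True)
u = mu * (x1 + x2) + 2 * sigma * sp.sqrt(x1 * x2)
residual = (sp.diff(u, x1) - mu) * (sp.diff(u, x2) - mu) - sigma ** 2
print(sp.simplify(residual))   # 0
```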
for more information on the connection between hamilton - jacobi equations and optimal control problems , we refer the reader to .let us pause momentarily to recall the definition of viscosity solution of where is open , is locally bounded with continuous for every , and is the unknown function .for more information on viscosity solutions of hamilton - jacobi equations , we refer the reader to .we denote by ( resp . ) the set of upper semicontinuous ( resp .lower semicontinuous ) functions on . for ,the _ superdifferential _ of at , denoted , is the set of all satisfying similarly , the _ subdifferential _ of at , denoted , is the set of all satisfying equivalently , we may set and a _ viscosity subsolution _ of is a function satisfying similarly , a _ viscosity supersolution _ of is a function satisfying the functions and are the lower and upper semicontinuous envelopes of with respect to the spatial variable , respectively .we will often say is a viscosity solution of to indicate that is a viscosity subsolution ( resp .supersolution ) of .if is a viscosity subsolution and supersolution of , then we say that is a _ viscosity solution _ of .notice that viscosity solutions defined in this way are necessarily continuous .[ thm : hjb ] suppose that are borel - measurable and bounded .let and set for .if is continuous then satisfies in the viscosity sense .recall that , and . the proof is based on a standard technique from optimal control theory for relating variational problems to hamilton - jacobi equations .the proof is very similar to ( * ? ? ?* theorem 2 ) .we will only sketch parts of the proof here .the proof is based on the following dynamic programming principle which holds for and small enough so that .the proof of is very similar to the proof of proposition [ prop : dpp ] .we now show that is a viscosity solution of .let and let . as in , we can use the dynamic programing principle to obtain suppose now that .then we automatically have furthermore , it follows from that , so we are done .consider now .setting in we have it follows that . by a similar argument we have , and hence we have establishes that is a viscosity solution of now set in and simplify to find that therefore is a viscosity solution of let and let . utilizing the dynamic programing principle again we have if we immediately have if we have that it follows that the supremum in is attained at some . introducing a lagrange multiplier ,the necessary conditions for to be a maximizer of the constrained maximization problem are it follows that and is given by . substituting this into we find that and hence is a viscosity solution of which completes the proof .[ rem : hjb ] it follows from theorem [ thm : hjb ] that is a viscosity solution of ( p ) and satisfies in the viscosity sense .indeed , we can simply apply theorem [ thm : hjb ] with in place of and , in which case we have .we study here the general hamilton - jacobi equation here , , is continuous and monotone , is the hamiltonian , and is the unknown function . for simplicity of notation , we will set throughout much of this section . the case where follows by a simple translation argument .we place the following assumptions on : * for every , the mapping is monotone non - decreasing .* there exists a modulus of continuity such that for all and . the assumption ( h1 ) is clearly satisfied by ( p ) , and generalizes the comparison results in our previous work , which was focused on the special case of . 
the assumption ( h2 )is standard in the theory of viscosity solutions .we now give a comparison principle for hamiltonians satisfying ( h1 ) and ( h2 ) .[ thm : comp ] suppose that satisfies ( h1 ) and ( h2 ) .let be a viscosity solution of let be a monotone viscosity solution of where , and suppose that on .then on .the proof of theorem [ thm : comp ] is based on the auxiliary function technique , which is standard in the theory of viscosity solutions , with modifications to incorporate the lack of compactness resulting from the unbounded domain .a standard technique for dealing with unbounded domains is to assume the hamiltonian is uniformly continuous in the gradient and modify the auxiliary function ( see , for example ( * ? ? ?* theorem 3.5 ) ) .since ( p ) is not uniformly continuous in the gradient , we can not use this technique . in our previous work , we included an additional boundary condition at infinity to induce compactness .it turns out that this is not necessary , and in the proof of theorem [ thm : comp ] , we instead heavily exploit the structure of the hamiltonian , namely ( h1 ) , to produce the required compactness .since is monotone ( i.e. , non - decreasing ) , it is bounded below by . without loss of generalitywe may assume that .let and set .it follows from ( h1 ) that is a viscosity solution of .assume by way of contradiction that .let be a function satisfying for set , and choose large enough so that since is and , it is a standard application of the chain rule to show that is a viscosity solution of since ] defined in .let be a viscosity solution of we say that is _ truncatable _ if for every , the -truncation is a viscosity solution of .this notion of truncatability is in spirit the same as ( * ? ? ?* definition 2.7 ) , though the exact definition is slightly different for notational convenience .we first show that the value function is truncatable .[ prop : utrunc ] suppose that are borel - measurable and bounded .let and define for .if is continuous then is a truncatable viscosity solution of it follows from theorem [ thm : hjb ] that is a viscosity solution of .we need only show that is truncatable .let , let denote the characteristic function of ] .let ] , let denote the remaining portion of , and reparametrize and so that \to{\mathbb{r}}^2 ] we have since and ] .therefore we have .since is arbitrary , we see that , the -truncation of .since is continuous , it follows from theorem [ thm : hjb ] that is a viscosity solution of since and is monotone decreasing , it follows that is viscosity subsolution of , which completes the proof .we now show that truncatability enjoys a useful -stability property .[ prop : trunc ] let and for each suppose that is a truncatable viscosity solution of if locally uniformly , for some , then is a truncatable viscosity solution of where we should note that the operation defining is taken jointly as and .this is a standard operation in the theory of viscosity solutions ( see ( * ? ? ?* section 6 ) ) , and it can be written more precisely for a function as it is a standard result ( see ( * ? ? ?* remark 6.3 ) ) that is a viscosity solution of . 
to see that is truncatable : fix , let be the -truncation of , and let be the -truncation of .since is truncatable , we have that is a viscosity solution of for every .furthermore , we have locally uniformly , and therefore is a viscosity solution of .thus is truncatable .we now relax ( h2 ) and allow to have discontinuous spatial dependence .given a set we assume satisfies * there exists a modulus of continuity such that for all there exists and such that for all , , , and with .this hypothesis is similar to one used by deckelnick and elliott to prove uniqueness of viscosity solutions to eikonal - type hamilton - jacobi equations with discontinuous spatial dependence .it is also a generalization of the cone condition used in our previous work .if we assume the subsolution is truncatable , then we can prove the following comparison principle , which holds for hamiltonians with discontinuous spatial dependence .[ thm : comp - trunc ] suppose that satisfies ( h3) for some .let be a truncatable viscosity solution of and let be a monotone viscosity solution of .suppose that on .then on .the proof of theorem [ thm : comp - trunc ] is similar to ( * ? ? ?* theorem 2.8 ) , so we postpone it to the appendix .for the remainder of the section we set our aim now is to apply the comparison principles from theorems [ thm : comp ] and [ thm : comp - trunc ] to obtain a comparison principle , and a perturbation result , for the hamilton - jacobi equation ( p ) .first we need to show that ( h2 ) and ( h3) are satisfied by given in .[ prop : hcont ] suppose that , and let be given by . then for any let , and set so that suppose first that . since is convex , we have since we have therefore we have if we have , and hence holds .[ rem : h2 ] it follows from proposition [ prop : hcont ] that satisfies ( h2 ) if and are globally lipschitz continuous on .[ cor : comp ] suppose that and are non - negative and globally lipschitz continuous on .let be a viscosity solution of and let be a monotone viscosity solution of furthermore , suppose that then on implies on .we claim that in the viscosity sense . to see this ,let and let .then we have if , then we must have as desired .if , then by we have , and we have by virtue of the monotonicity of .let and set . by and we see that is a viscosity solution of by proposition [ prop : hcont ] and remark [ rem : h2 ] we see that ( h1 ) and ( h2 ) are satisfied. therefore we can apply theorem [ thm : comp ] to find that .sending completes the proof .recall that and are not independent functions in the dlpp problem , even though we have treated them as such for much of the analysis . from this point on, we will need to recall their relationship , as it is important for proving uniqueness in ( p ) . specifically, we need to assume that and satisfy ( f3 ) for the _ same _ choice of at each .when this holds , we say that and _ simultaneously _ satisfy ( f3 ) . since for exponential dlpp and for geometric dlpp , is always a monotone increasing function of , and hence and simultaneously satisfy ( f3 ) in both cases .we recall that , , , and ( f1)(f3 ) are defined in section [ sec : results ] , and that on .[ prop : hypo ] let and simultaneously satisfy ( f1 ) and ( f3 ) . 
then given by satisfies ( h3) with .let .if , then we can choose small enough so that .by proposition [ prop : hcont ] we see that any choice for will suffice since and are lipschitz with constant when restricted to .if for some , then let be as given in ( f3 ) .assume for now that , and set .let be less than half the value of from ( f3 ) , and then choose smaller , if necessary , so that has an empty intersection with and all other , and .let and denote the lipschitz extensions of and to , respectively , and make the same definitions for and .then ( f3 ) implies that and on .furthermore , since and are upper semicontinuous , we have and on .let , , , and with .if , then since is monotone , , and , we must have that .since and are lipschitz on , we can invoke proposition [ prop : hcont ] to show that ( h3) holds .now suppose that .if , then ( h3) holds as before , so assume that .let such that .then we have where we used the fact that on .we have an identical estimate for , and the proof is completed by invoking proposition [ prop : hcont ] .[ cor : comp - trunc ]let and simultaneously satisfy ( f1 ) and ( f3 ) .let be a truncatable viscosity solution of , let be a monotone viscosity solution of , and suppose that holds .then on implies on .the proof of corollary [ cor : comp - trunc ] is similar to corollary [ cor : comp ] .we now prove an important perturbation result . roughly speaking, it says that if we smooth out the macroscopic mean and variance ( i.e. , remove the discontinuities ) , then the resulting change in the value function is uniformly small .this result is used in the proof of our main result , theorem [ thm : main ] .the proof relies on the uniqueness of truncatable viscosity solutions of ( p ) ( theorem [ thm : comp - trunc ] and corollary [ cor : comp - trunc ] ) , and the result can then be used to prove a comparison principle for ( p ) without the truncatability assumption ( see theorem [ thm : final - comp ] ) .[ thm : perturbation ] let and satisfy and simultaneously satisfy ( f1 ) , ( f3 ) .let satisfy ( f1 * ) with .furthermore suppose that and for all .then for every we have for simplicity , let us set and for . since , we can apply theorem [ thm : reg ] with to find that is continuous on . we can apply theorem [ thm : reg ] again with to show that for every , there exists and a modulus of continuity such that \times[z_2,r] ] has a finite number of intersections with .it follows that and hence , which establishes the claim . by proposition [ prop : hypo ] , given by satisfies ( h3) for . by ( f1 * ) and we have for , and hence for .similarly , we have that for .it follows that on , and by applying a translated form of corollary [ cor : comp - trunc ] to find that on .[ rem : conv ] sequences generated by inf- and sup - convolutions of and satisfy the hypotheses of theorem [ thm : perturbation ] . recall that the sup - convolution of is defined by and the inf - convolution by .[ cor : perturbation ] let and simultaneously satisfy ( f1 ) , ( f3 ) and , let satisfy ( f1 * ) with , and let satisfy ( f2 ) .if hold for all then fix . by proposition [ prop : dpp ] we have and arguing by symmetry , it follows from theorem [ thm : perturbation ] that .\ ] ] it follows from and a similar argument as in theorem [ thm : perturbation ] that for any . 
by the arzel - ascoli theoremwe find that \cap \partial { \mathbb{r}}^2_+.\ ] ] combining , we have that .locally uniform convergence follows again from the arzel - ascoli theorem .[ thm : final - comp ] let and simultaneously satisfy ( f1 ) , ( f3 ) and , and let satisfy ( f2 ) .let be a viscosity solution of and let be a monotone viscosity solution of .then if on , where is given in the statement of theorem [ thm : main ] , then on .let and be the sup- and inf - convolutions of and as defined in ( see remark [ rem : conv ] ) , respectively . to simplify notation ,let us write , , and .by definition we have , and by corollary [ cor : perturbation ] and remark [ rem : conv ] we have locally uniformly on as .since and we have that is a viscosity solution of by theorem [ thm : hjb ] , is a viscosity solution of furthermore , we have on where .since and are globally lipschitz we can apply corollary [ cor : comp ] to obtain .sending we have . by a similar argumentwe can prove that , which completes the proof .in this section we give the proof of our main result , theorem [ thm : main ] .we first have a preliminary convergence result on the interior , which we later adapt to account for the boundary source .for we define where and is defined in .[ lem : prelim - conv ] assume satisfies ( f1 ) and ( f3 ) .suppose that the weights satisfy and are either all exponential , or all geometric random variables , consructed as in section [ sec : results ] .in the exponential case , set , and in the geometric case , set .then for every we have ,\ ] ] with probability one .let .let and be the sup- and inf - convolutions of , defined in ( see remark [ rem : conv ] ) . in the exponential case , set and , and in the geometric case , set and . to simplify notation , let us also set , , and , and note that .notice that by the definition of , we have that holds for both the exponential and geometric cases. we can therefore invoke theorem [ thm : perturbation ] to find that .\ ] ] let . in the exponential case , for let be independent and exponentially distributed with parameter , and let be independent and exponentially distributed with parameter . in the geometric case , for let be independent and geometrically distributed with parameter , and let be independent and geometrically distributed with parameter . in either case ,set and set we can define and on the same probability space as in such a way that for all with probability one .we therefore have with probability one. since and are continuous on , we can invoke theorem ( * ? ? ?* theorem 1 ) to find that with probability one , for fixed ] that uniform convergence follows from the fact that and are monotone decreasing and is uniformly continuous on ] given by , where , and let .set } \nu ] .then we have that where the last inequality follows from the monotonicity of .fix and let . then are _ i.i.d ._ , and the polynomial growth restriction on guarantees that the moments of are finite .we therefore have by the law of large numbers that with probability one as .similarly , we have with probability one as .it follows that with probability one as .since the above holds for every , we have from that with probability one . by the assumptions on and , is continuous except possibly at points of discontinuity of , which are locally finite .thus is riemann integrable , and taking we have with probability one .the proof of the analogous inequality is similar .we now have the proof of theorem [ thm : main ] .let , and suppose that . 
if , then with probability one as .if then we have it follows from lemma [ lem : slln ] and the construction of the weights in section [ sec : results ] that with probability one as .the case where and is similar . as in lemma[ lem : prelim - conv ] , we can use the fact that and are monotone non - decreasing , and is uniformly continuous , to show that we actually have locally uniformly on with probability one .let . from the definition of we have the following dynamic programming principle combining lemma [ lem : prelim - conv ] , proposition [ prop : dpp ] , and, we can pass to the limit in to obtain with probability one .as in lemma [ lem : prelim - conv ] , locally uniform convergence follows from the monotonicity of and , along with the uniform continuity given by theorem [ thm : reg ] .we present here a fast numerical scheme for computing the viscosity solution of ( p ) .the scheme is a minor modification of the scheme used in . since information propagates along coordinate axes in the definition of the variational problem for , it is natural to consider using backward difference quotients to approximate ( p ) . letting denote the numerical solution on the grid of spacing , we have where and . given and , we can solve for via the quadratic formula to obtain for .the choice of the positive root in reflects the monotonicity of the scheme , and ensures that it captures the viscosity solution of ( p ) . when or , we recall the boundary condition to obtain notice that when , if we set and in , then and are equivalent . in fact , even when , and are asymptotically equivalent as provided .the same observations hold when if we set .thus , to account for the boundary condition in ( p ) , we can simply set and compute via for all ^ 2 ] , .first note that for . therefore , by , we have that on ^ 2\cap \partial { \mathbb{r}}^2_+ ] such that note that by the concavity of we have that it follows that by monotonicity of we therefore have this contradicts , hence .the proof is completed by invoking ( * ? ? ?* theorem 2.1 ) .we now extend the numerical convergence result to satisfying ( f1 ) and ( f3 ) .[ cor : final - conv ] suppose that and simultaneously satisfy ( f1 ) , ( f3 ) and , and let satisfy ( f2 ) .define as in theorem [ thm : lip - conv ] .then we have where is the unique monotone viscosity solution of ( p ) .define and as in the proof of theorem [ thm : final - comp ] .by definition we have , and by corollary [ cor : perturbation ] and remark [ rem : conv ] we have locally uniformly on as .let and denote the numerical solutions defined by ( s ) for and , respectively , extended to as in theorem [ thm : lip - conv ] .since , and are lipschitz continuous and satisfies ( f2 ) , we can apply theorem [ thm : lip - conv ] to show that locally uniformly on as . since and , we can make an argument , as in theorem [ thm : lip - conv ] , based on a comparison principle for ( s ) , to show that for all . the proof is completed by combining this with and the locally uniform convergence .we present here some numerical simulations comparing the numerical solutions of ( p ) , computed by ( s ) , to realizations of directed last passage percolation ( dlpp ) .we restrict our attention to the box ^ 2 ] is the maximizing argument in and .the algorithm is terminated as soon as and we append the final terminal point . 
in, we set whenever .the algorithm is summarized in algorithm [ alg : max - curve ] .given a step size and , we generate as follows : + ; + ; notice that the boundary source does not appear explicitly in algorithm [ alg : max - curve ] , though it does appear implicitly through the solution of ( p ) .each step of the algorithm moves a distance of at least in the direction or . if ^ 2 ] with constant , and suppose that satisfies ( f2 ) .let , ^ 2 ] be the monotone polygonal curve passing through .then there exists a constant such that for convenience , we set , , and we extend , and to functions on by setting for . writing and for , we can parameterize so that for and .it follows that an application of hlder s inequality gives for and any with and . combining this with the dynamic programming principle we have } \left\ { u(y(s ) ) + 2\sigma(x_{k - j}){\varepsilon}\sqrt{s(1-s ) } ) \right\}+ 3c_{lip } { \varepsilon}^2,\ ] ] for all . by the definition of we have for . by iterating this inequality for have we have two cases now .suppose first that .then and by we have that . combining this with we have the proof is completed by noting that .suppose now that . then holds for and combining this with we have since we must have .it follows that inserting this into we see that if and are not globally lipschitz continuous , then algorithm [ alg : max - curve ] is not guaranteed to yield optimal curves .however , it can be easily modified to give an algorithm that does .[ cor : max - curve2 ] suppose that and simultaneously satisfy ( f1 ) , ( f3 ) and , and let satisfy ( f2 ) .let and be sequences of functions such that and are lipschitz with constant , , and locally uniformly .let ^ 2 ] be the monotone polygonal curve generated by applying algorithm [ alg : max - curve ] to and with .then we have let us set , , , and . by theorem [ thm : max - curve ] there exists a constant such that it follows that we now show some simulation results using algorithm [ alg : max - curve ] to compute approximately optimal curves for the exponential / geometric dlpp simulations presented in section [ sec : sim ] .figure [ fig : demo2 ] shows the curves generated by algorithm [ alg : max - curve ] along with optimal paths for realizations of dlpp on a grid .we also show the level sets of the numerical solutions of ( p ) to give points of reference . in all cases , we used a step size of and computed in algorithm [ alg : max - curve ] by an exhaustive search with a grid size of . with these choices of parameters ,algorithm [ alg : max - curve ] runs in approximately a quarter of a second , assuming the numerical solution is already available .note also that we implemented algorithm [ alg : max - curve ] exactly as written , even when and are discontinuous , and do not substitute continuous versions as in corollary [ cor : max - curve2 ] . 
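a minimal sketch in the spirit of the scheme ( s ), assuming the bulk equation takes the form ( u_{x_1} - mu ) ( u_{x_2} - mu ) = sigma^2 suggested by the homogeneous time constant: the backward - difference discretization is then quadratic in the unknown grid value, and the larger root ( matching the choice of the positive root mentioned above ) gives the monotone update below. the function and variable names are ours, and the boundary values used here are the no - source one - dimensional law - of - large - numbers limits along the axes.

```python
import numpy as np

def solve_hj(n, mu, sigma):
    """monotone backward-difference scheme on the unit square for the
    assumed bulk equation (u_x1 - mu)(u_x2 - mu) = sigma**2."""
    h = 1.0 / n
    x = np.arange(n + 1) * h
    U = np.zeros((n + 1, n + 1))
    # boundary values: integrals of the mean along the axes (no boundary source)
    U[1:, 0] = h * np.cumsum([mu(xi, 0.0) for xi in x[1:]])
    U[0, 1:] = h * np.cumsum([mu(0.0, xj) for xj in x[1:]])
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            a, b = U[i - 1, j], U[i, j - 1]
            m, s = mu(x[i], x[j]), sigma(x[i], x[j])
            # larger root of ((U - a)/h - m) * ((U - b)/h - m) = s**2
            U[i, j] = 0.5 * (a + b + 2.0 * m * h
                             + np.sqrt((a - b) ** 2 + 4.0 * (s * h) ** 2))
    return U

# homogeneous sanity check: mu = sigma = 1 should give U(1, 1) close to 4
U = solve_hj(400, lambda x1, x2: 1.0, lambda x1, x2: 1.0)
print(U[-1, -1])
```

for discontinuous mu and sigma one can either smooth them first, as in corollary [ cor : final - conv ], or evaluate them pointwise as above; the experiments reported in the text suggest the scheme captures the induced discontinuities accurately either way.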
as in , it is expected that the optimal paths for dlpp will asymptotically concentrate around optimal curves for the variational problem , and this is clearly reflected in the simulations in figure [ fig : demo2 ] .notice that for exponential dlpp with means and geometric dlpp with parameter , there are multiple maximizing curves for any terminal point along the diagonal .we see that some of the dlpp realizations concentrate around one optimal path , while the remaining realizations concentrate around the other .algorithm [ alg : max - curve ] will of course only find one of the maximizing curves , depending on the choice one makes when there are multiple maximizing arguments in the definition of .we now show some simulations with a source term . herewe consider exponential dlpp with mean on \times ( 0,1] ] .figure [ fig : sources_demo1 ] shows the optimal curve generated by algorithm [ alg : max - curve ] , along with the level sets of the numerical solution of ( p ) and the optimal paths from 10 realizations of exponential dlpp on a grid .although our assumptions only allow sources on the boundary , many of the results in the paper can be shown to hold for sources along horizontal or vertical lines in . the idea is to find the appropriate dynamic programming principle that plays the role of proposition [ prop : dpp ] , so that the effect of the weights in the bulk is separated from the source . in the case of a source along the line for , and assuming no boundary sources ,the dynamic programming principle would be where , , and are , say , lipschitz on , and represents the source , which is nonzero only on the line .we can then use this dynamic programming principle and its discrete version ( similar to ) in the proof of theorem [ thm : main ] .the one caveat is that is in general discontinuous along the line containing the source , though remains locally uniformly continuous on each of the components of obtained by removing the source line .thus , can only be identified via the variational problem , since we have not proven uniqueness of discontinuous viscosity solutions of ( p ) . however , our numerical results suggest that either uniqueness holds for ( p ) in some special cases where is discontinuous , or at the very least our numerical scheme for ( p ) selects the `` correct '' viscosity solution for the percolation problem .figure [ fig : sources_demo2 ] , [ fig : sources_demo3 ] , and [ fig : sources_demo4 ] show the optimal curve generated by algorithm [ alg : max - curve ] , along with dlpp simulations for sources on the horizontal lines , , and , respectively . finally , we consider the totally asymmetric simple exclusion process ( tasep ) with a slow bond rate at the origin .this model was originally introduced by janowsky and lebowitz , and some partial results were obtained more recently by sepplinen .the process of interest is the usual tasep with exponential rates of at all locations in except for the origin , which has a slower rate of $ ] .one can think of this as modeling traffic flow on a road with a single toll both that every car must pass through . through the correspondence with dlpp , the slow bond rate corresponds to a source on the diagonal . 
in the context of our paper, we would have notice that does not satisfy the assumptions of theorem [ thm : main ] , and we do not expect the continuum limit ( p ) to hold in this case .a quantity of interest is which corresponds to the reciprocal of the maximum tasep current .it is known that and sepplinen proved the following bounds : it is an open problem to determine for . in particular ,one is interested in whether for all , or if there are some values of close to for which the inverse current remains unchanged . even though we do not expect our continuum limit hamilton - jacobi equation to hold for the slow bond rate problem , it is nevertheless interesting to see what our results would say about this open problem were they to hold .it is easy to see that for given by .indeed , one can see that the optimal curve in the variational problem must lie on the diagonal , which gives the energy .this would suggest that notice that this violates the bounds in , which indicates that the hamilton - jacobi equation continuum limit ( theorem [ thm : main ] ) does _ not _ hold for sources along diagonal lines .it has recently come to our attention that the slow bond rate problem has been setteled by basu , sidoravicius , and sly .they show that the inverse current is _ always _ affected when , but do not give an explicit formula for .in this work , we identified a hamilton - jacobi equation for the continuum limit of a macroscopic two - sided directed last passage percolation ( dlpp ) problem .we rigorously proved the continuum limit when the macroscopic rates are discontinuous .furthermore , we presented a numerical scheme for solving the hamilton - jacobi equation , and an algorithm for finding optimal curves based on a dynamic programming principle .below we make some remarks , discuss simple extensions of this work , and ideas for future work . * * regularity of : * there are many simple modifications of ( f1 ) under which one can prove theorem [ thm : main ] . for example , the existence of the set bounded by the strictly decreasing curve and on which is not necessary , and one can check that the proofs hold without this assumption .this would correspond to a tasep model with step initial condition .the curves on which and may admit discontinuities can all be chosen to be strictly decreasing instead of increasing , with appropriate modifications in the proofs .in fact , we can even allow the curves to switch from strictly increasing to strictly decreasing , provided the critical point is isolated , and we make an additional cone condition assumption at this point . however , the curves can not have any positive measure flat regions , as this can induce discontinuities in , as shown in remark [ rem : discont ] . ** discontinuous viscosity solutions : * the regularity assumption ( f1 ) was chosen to ensure that is locally uniformly continuous .this is essential for invoking the arzel - ascoli theorem in the proof of theorem [ thm : perturbation ] , and in the proof of the comparison principle for ( p ) ( theorem [ thm : comp - trunc ] ) .we believe that theorem [ thm : main ] holds under far more general assumptions on , allowing to be discontinuous .presently , we do not know how to prove this .the largest obstacle seems to be proving uniqueness of viscosity solutions of ( p ) when the solutions and the macroscopic weights are discontinuous .our numerical results seem to support this conjecture , as the numerical scheme is able to very accurately capture discontinuities in . 
** hydrodynamic limit of tasep : * as we showed in section [ sec : formal - eq ] , the hamilton - jacobi equation ( p ) is formally equivalent to the conservation law governing the hydrodynamic limit of tasep . it would be very interesting to make this connection rigorous .* * higher dimensions : * the main obstacle in generalizing the hamilton - jacobi equation ( p ) , and the results in this paper , to dimensions , is the fact that the exact form of the time constant for _ i.i.d . _ random variables is unknown . if an exact form for the time constant were to be discovered for , then we anticipate no problems in generalizing the results in this paper to higher dimensions .we should note that although the exact form of is unknown for , it is known that is continuous , 1-homogeneous , symmetric in all variables , and superadditive , under fairly broad assumptions on the distribution of .this is enough to show that is the viscosity solution of some hamilton - jacobi equation , but the explicit form of the equation is unknown .the author would like to thank jinho baik for suggesting the problem and for stimulating discussions .the author would also like to thank the anonymous referee whose comments and suggestions have greatly improved this manuscript .for completeness we give the proof of theorem [ thm : comp - trunc ] here .the proof is similar to ( * ? ? ?* theorem 2.8 ) .suppose that let where since , we have by hypothesis that on .therefore , since and are continuous we have . by there exists that for set and let denote the -truncation of . by and we see that by we have , and hence and be as given in ( h3) .choose small enough , and smaller if necessary , so that . for we claim that for large enough , there exists such that to see this , first substitute into to find for any such that .since and are continuous , it follows from that since is bounded , and is monotone , we have by that latexmath:[\[\label{eq : xy - close } let such that .set and , and define a short calculation shows that .since we have since is pareto - monotone and we have by that since , we see from and that furthermore , by we have similarly we have .since we have it follows from this and that for every there exists such that . bywe may pass to a subsequence if necessary to find such that as .then we have combining this with we have and since , it follows from the definition of that .therefore , for large enough we have , which establishes the claim .letting we have by that . by hypothesiswe have since is truncatable , is a viscosity solution of and therefore subtracting from we have let and note that where by we have .therefore , for large enough we have and .since we can invoke ( h3) to find that note that combining this with and we have sending yields a contradiction .
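to complement the simulations discussed above, the following self - contained sketch ( python, our names ) covers the discrete side: exact last - passage times and one optimal path by dynamic programming, and inhomogeneous exponential or geometric weights driven by a common array of exp(1) variables as in the coupling of section [ sec : results ]. the geometric convention has support starting at 0 with mean q/(1 - q), and the final check uses the homogeneous time constant mu ( x + y ) + 2 sigma ( xy )^{1/2}.

```python
import numpy as np

def last_passage_time(w):
    """G[i, j]: maximal weight of an up/right path from (0, 0) to (i, j)."""
    G = np.zeros_like(w, dtype=float)
    for i in range(w.shape[0]):
        for j in range(w.shape[1]):
            prev = max(G[i - 1, j] if i else 0.0, G[i, j - 1] if j else 0.0)
            G[i, j] = w[i, j] + prev
    return G

def optimal_path(G):
    """one maximizing path, traced backwards through the DP table."""
    i, j = G.shape[0] - 1, G.shape[1] - 1
    path = [(i, j)]
    while (i, j) != (0, 0):
        if j == 0 or (i > 0 and G[i - 1, j] >= G[i, j - 1]):
            i -= 1
        else:
            j -= 1
        path.append((i, j))
    return path[::-1]

def inhomogeneous_weights(n, f, kind="exponential", rng=None):
    """weights with macroscopic mean f(i/n, j/n), driven by common Exp(1)'s."""
    rng = np.random.default_rng() if rng is None else rng
    Y = rng.exponential(1.0, (n, n))
    grid = np.arange(1, n + 1) / n
    mu = f(grid[:, None], grid[None, :])
    if kind == "exponential":
        return mu * Y
    q = mu / (1.0 + mu)                  # geometric parameter with mean q/(1-q) = mu
    w = np.zeros_like(q)
    pos = q > 0
    w[pos] = np.floor(-Y[pos] / np.log(q[pos]))
    return w

# homogeneous check: Exp(1) weights, G(n, n)/n should approach (1 + 1) + 2*1 = 4
n = 1000
w = inhomogeneous_weights(n, lambda x1, x2: np.ones_like(x1 * x2))
print(last_passage_time(w)[-1, -1] / n)
```

multiplying the diagonal weights by 1/r for some r < 1 in the exponential case gives a crude probe of the slow - bond question above, under the usual identification of a rate - r slow bond with exponential mean 1/r weights on the diagonal ( stated here as an assumption, not as a quotation of the text ).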
we prove that a directed last passage percolation model with discontinuous macroscopic ( non - random ) inhomogeneities has a continuum limit that corresponds to solving a hamilton - jacobi equation in the viscosity sense . this hamilton - jacobi equation is closely related to the conservation law for the hydrodynamic limit of the totally asymmetric simple exclusion process . we also prove convergence of a numerical scheme for the hamilton - jacobi equation and present an algorithm based on dynamic programming for finding the asymptotic shapes of maximal directed paths .
[ intro ] the technique of _ chaining _ is applicable in many situations . a simple case is e.g. , when we want to calculate the partial sums ( resp . products ) of a ( not necessarily bounded ) list of integers , with a given ` base ' integer ; such a list of partial sums ( resp .products ) can be calculated , incrementally , with the help of the following two equations : where is the empty list , is the given base integer , is an integer variable , and is the given list of integers .the partial sums ( resp .products ) are returned as a list , by evaluating the function , when is interpreted as the sum ( resp .product ) of with the given base integer .a more sophisticated example is the cipher block chaining encryption mode ( cbc , in short ) , employed in cryptography , a mode which uses the ac - operator exclusive - or ( xor ) for ` chaining the ciphers across the message blocks ' ; here is how this is done : let stand for xor ( which we let distribute over block concatenation ) , and let be a message given as a list of ` plaintext ' message subblocks .then the encryption of , with any given public key and an initialization vector , is defined as the list of ciphertext message subblocks , where : , and , for any .( note : it is usual in cryptography to see a message as a sequence of `` records '' , each record being decomposed into a sequence of blocks of the same size ; what we refer to as ` message ' in this paper , would then correspond to a ` record ' in the sense of cryptography . )the above set of equations also models this cbc encryption mode : for this , we interpret the function as the encryption of any single block message , xor - ed with the initialization vector , using the given public key .under such a vision , a message is decomposed as the concatenation of its first message block with the rest of the message list , i.e. , we write ; then , the encryption of with any given public key , with taken as initialization vector ( iv ) , is derived by . actually , our interest in the equational theory defined by the above two equations was motivated by the possibility of such a modeling for cipher block chaining , and the fact that rewrite as well as unification techniques are often employable , with success , for the formal analysis of cryptographic protocols ( cf .e.g. , , and also the concluding section ) .this paper is organized as follows . in section [ prelim ]we introduce our notation and the basic notions used in the sequel ; we shall observe , in particular , that the two equations above can be turned into rewrite rules and form a convergent rewrite system over a 2-sorted signature : _ lists _ and _ elements_. our concern in section [ bc - inf ] is the unification problem modulo this rewrite system , that we denote by ; we present a 2-level inference system ( corresponding , in a way , to the two sorts of the signature ) for solving this problem .although our main aim is to investigate the unification problem for the case where is an interpreted function symbol ( as in the two situations illustrated above ) , we shall also be considering the case where is a free uninterpreted symbol .the soundness and completeness of our inference procedure are established in section [ method ] . 
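since both motivating examples are completely determined by the two equations above, a short executable sketch may be useful; python is used, a list is an ordinary python list, and toy_encrypt is an arbitrary injective map on 8 - bit blocks standing in for the keyed encryption of a single block ( purely illustrative, with no cryptographic value ).

```python
def bc(y, xs, h):
    """chaining combinator defined by the two equations:
       bc(y, nil) = nil
       bc(y, cons(x, X)) = cons(h(x, y), bc(h(x, y), X))"""
    if not xs:
        return []
    z = h(xs[0], y)
    return [z] + bc(z, xs[1:], h)

# partial sums with base 0, partial products with base 1
print(bc(0, [3, 1, 4, 1, 5], lambda x, y: x + y))   # [3, 4, 8, 9, 14]
print(bc(1, [3, 1, 4, 1, 5], lambda x, y: x * y))   # [3, 3, 12, 12, 60]

# toy cbc: h(x, y) = E_k(x xor y), with E_k replaced by an injective toy map
def toy_encrypt(block):
    return (5 * block + 7) % 256        # injective on 8-bit blocks (gcd(5, 256) = 1)

iv, message = 0x3C, [0x41, 0x42, 0x43]
print(bc(iv, message, lambda x, y: toy_encrypt(x ^ y)))
```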
while the complexity of the unification problem is polynomial over the size of the problem when is uninterpreted , it turns out to be np - complete when is interpreted so that the rewrite system models cbc encryption .we then present , in section [ dbc ] , a 2-sorted convergent system that fully models at an abstract level , a block chaining cipher - decipher mode without using any ac - operators ; this is done by adding a couple of equations to the above two : one for specifying a left - inverse for ( does the deciphering ) , and the other for specifying the block chaining mode for deciphering .a 2-level inference procedure extending the one given in section [ bc - inf ] is presented , and is shown to be sound and complete for unification modulo this extended system ; unification modulo also turns out to be np - complete . in the concluding section we briefly evoke possible lines of future work over these systems and .the first part of this paper , devoted to unification modulo , is a more detailed version of the work we presented at lata 2012 ( ) .we consider a ranked signature , with two _ disjoint _ sorts : and , consisting of binary functions _bc , cons , h _ , and a constant , and typed as follows : , , , .we also assume given a set of countably many variables ; the objects of our study are the ( well - typed ) terms of the algebra ; terms of the type will be referred to as _ elements _ ; and those of the type as _ lists_.it is assumed that the only constant of type list is ; the other constants , if any , will all be of the type element . for better readability, the set of variables will be divided into two subsets : those to which ` lists ' can get assigned will be denoted with upper - case letters as : , with possible suffixes or primes ; these will be said to be variables of type ; variables to which ` elements ' can get assigned will be denoted with lower - case letters , as : , with possible suffixes or primes ; these will be said to be variables of type .the theory we shall be studying first in this paper is defined by the two axioms ( equations ) already mentioned in the introduction : it is easy to see that these axioms can both be oriented left - to - right under a suitable _ lexicographic path ordering ( lpo ) _e.g. , ) , and that they form then a convergent i.e. , confluent and terminating 2-sorted rewrite system .as mentioned in the previous section , we consider two theories that contain the above two axioms .the first is where these are the _ _ only axioms ; we call that theory . the other theory is where is interpreted as for cbc , i.e. , where where is exclusive - or and is encryption using some ( fixed ) given key .this theory will be referred to as .we use the phrases `` -unification '' and `` unification modulo '' to refer to unification problems modulo both the theories , collectively .note that in the case where is a free uninterpreted symbol ( i.e. , ) is fully cancellative in the sense that for any terms , if and only if and .but when is interpreted for cbc , this is no longer true ; in such a case , will be only _ semi - cancellative _ , in the sense that for all terms , the following holds : is right - cancellative : if and only if , and is also left - cancellative : if and only if .thus , in the sequel , when we look for the unifiability of any set of element equations modulo ( resp .modulo ) the cancellativity of ( resp .the semi - cancellativity of ) will be used as needed , in general without any explicit mention . 
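to see the convergent system at work, the sketch below normalises list - sorted terms by the two rules oriented left to right; terms are encoded as nested python tuples, which is our own encoding and not the notation of the paper.

```python
def normalize(t):
    """normalise a term by bc(y, nil) -> nil and
       bc(y, cons(x, X)) -> cons(h(x, y), bc(h(x, y), X));
       element-sorted subterms (h-terms, constants, variables) are untouched."""
    if not isinstance(t, tuple):
        return t
    if t[0] == "cons":
        return ("cons", t[1], normalize(t[2]))
    if t[0] == "bc":
        y, rest = t[1], normalize(t[2])
        if rest == "nil":
            return "nil"
        if isinstance(rest, tuple) and rest[0] == "cons":
            z = ("h", rest[1], y)
            return ("cons", z, normalize(("bc", z, rest[2])))
        return ("bc", y, rest)      # tail is a variable: already a normal form
    return t

print(normalize(("bc", "v", ("cons", "x1", ("cons", "x2", "nil")))))
# ('cons', ('h', 'x1', 'v'), ('cons', ('h', 'x2', ('h', 'x1', 'v')), 'nil'))
```

for the cbc interpretation h(x, y) = E_k( x xor y ), note that 0x10 xor 0x01 = 0x11 = 0x11 xor 0x00, so h(0x10, 0x01) = h(0x11, 0x00) even though the argument pairs differ, while fixing either argument and varying the other always changes the value because E_k is injective; this is exactly the semi - cancellativity used in the sequel.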
our concern in this section , and the one following , is the equational unification problems modulo and .we assume without loss of generality ( wlog ) that any given -unification problem is in _ standard form , _i.e. , is given as a set of equations , each having one of the following forms : + where stands for any ground constant of sort .the first four kinds of equations the ones with a list - variable on the left - hand side are called _ list - equations , _ and the rest ( those which have an element - variable on the left - hand side ) are called _ element - equations . _ for any problem in standard form , will denote the subset formed of its list - equations , and the subset of element - equations .a set of element - equations is said to be in _ dag - solved form _ ( or _ d - solved form _ ) ( ) if and only if they can be arranged as a list , such that : : and are distinct variables , and does not occur in nor in any .such a notion is naturally extended to sets of list - equations as well . in the next sectionwe give an inference system for solving any -unification problem in standard form .for any given problem , its rules will transform into one in -solved form .the element - equations at that point can be passed on to an algorithm for solving them thus in the case of what we need is an algorithm for solving the _ general _ unification problem modulo the theory of exclusive - or .any development presented below without further precision on is meant as one which will be valid for both and .[ bc - inf ] the inference rules have to consider two kinds of equations : the rules for the _ list - equations _ in , i.e. , equations whose left - hand sides ( lhs ) are variables of type , and the rules for the _ element - equations , _ i.e. , equations whose lhs are variables of type . our method of solving any given unification problem will be ` modular ' on these two sets of equations : the list - inference rules will be shown to terminate under suitable conditions , and then all we will need to do is to solve the resulting set of element - equations for . a few technical points need to be mentioned before we formulate our inference rules .note first that it is not hard to see that is cancellative ; by this we mean that , for terms , if and only if and .on the other hand , it can be shown by structural induction ( and the semi - cancellativity of ) that is _ conditionally _ semi - cancellative , depending on whether its first argument is or not ; for details , see _appendix-1_. this property of will be assumed in the sequel .the inference rules given below will have to account for cases where an ` occur - check ' succeeds on some list - variable , and the problem will be unsolvable .the simplest among such cases is when we have an equation of the form in the problem .but one could have more complex unsolvable cases , where the equations involve both and ; e.g. , when contains equations of the form : ; the problem will be unsolvable in such a case : indeed , from the axioms of , one deduces that must be of the form , for some and , then must be of the form , and subsequently , and we are back to a set of equations of the same format .we need to infer failure in all such cases . with that purpose ,we define the following relations on the list - variables of the equations in : * iff , for some . 
* iff there is an equation * iff , or , for some .note that is the symmetric closure of the relation ; its reflexive , symmetric and transitive closure is denoted as .the transitive closure of is denoted as ; and its reflexive transitive closure as .note , on the other hand , that is solvable by the substitution ; in fact this equation forces to be , as would also a set of equations of the form .such cycles ( as well as some others ) have to be checked to determine whether a list - variable is forced to be .this can be effectively done with the help of the relations defined above on the type variables .we define , recursively , a set * nonnil * of the list - variables of that can not be for any unifying substitution , as follows : * if is an equation in , then . * if is an equation in , then if and only if .we have then the following obvious result : [ nonnilvar ] a variable if and only if there are variables and such that and . some of the inference rules below will refer to a graph whose nodes are the list - variables of the given problem , ` considered equivalent up to equality ' ; more formally : for any list - variable of , we denote by ] any relation defined over the list - variables of is then extended naturally to these equivalence classes , by setting : , \dotsc , [ u_n ] ) \ ; ~ \mathrm{iff}~ \ ; \exists v_1 \in [ u_1 ] \ , \dotso \ ,\exists v_n \in [ u_n ] \colon \mathcal{r}(v_1 , \dotsc , v_n) ] on there is a _ directed _ arc to a ( not necessarily different ) node ] and ] on is said to be a -peak if contains two different equations of the form ; the node ] iff there is a path on from ] , at least one arc of which has label . in other words , a list - variable of is said to _ violate occur - check _iff \succ_l [ u] ] ( l2 ) _ cancellation on _ : ( l3.a ) _ nil solution-1 _ : ( l3.b ) _nil solution-2 _ : ( l3.c ) _nil solution-3 _ : { { \mathcal{eq}}~ \cup ~ \ { u = _ { } ^ ? nil , \ ; v = _ { } ^ ? nil \ } } { { \mathcal{eq}}~ \uplus ~ \ { u = _ { } ^ ?bc(v , x ) \}} ] if ( l5 ) _ splitting _ , at a -peak : ( l6 ) _ occur - check violation _ : + } \succ_l { [ } u { ] } ~ \mathrm{~ on ~ the ~ graph ~ } g_l ] { fail } { { \mathcal{eq}}} ] ; and that would have caused the inference process to terminate with fail , as soon as both the variables and appear in the problem derived under the inferences .termination of ( l4.b ) can now be proved as follows : the number of -equivalence classes may increase by 1 with each application of ( l4.b ) , but the number of -equivalence classes remains the same , for the same reason as above .let be the number of -equations in the input problem and let be the number of variables in the input problem .we then show that the total number of applications of ( l4.b ) and ( l5 ) can not exceed : indeed , whenever one of ( l4.b ) or ( l5 ) is applied , some number of -equations are removed and an equal or lesser number are added ,whose variables belong to -equivalence classes at a ` lower level ' as explained above , i.e. , below some steps . 
there are at most such equivalence classes , since the number of equivalence classes does not increase ( and there can not be more than such equivalence classes , to start with ) .so a -equation can not be `` pushed down '' more than times .since there are initially -equations , the total number of applications of ( l4.b ) and ( l5 ) can not exceed .a set of equations will be said to be _l - reduced _ if none of the above inference rules ( l1 ) through ( l7 ) is applicable .( note : such a problem may not be in -solved form : an easy example is given a couple of paragraphs below . )* unification modulo : * the rules ( l1 ) through ( l7 ) are not enough to show the existence of a unifier modulo .the subset of element - equations , , may not be solvable ; for example , the presence of an element - equation of the form should lead to failure .however , we have the following : [ list - result ] if is in l - reduced form , then is unifiable modulo if and only if the set of its element - equations is solvable .if is -reduced , then setting every list - variable that is not in * nonnil * to will lead to _ a unifier _ for , modulo , provided is solvable .recall that is the theory defined by when is uninterpreted .[ list - poly ] let be any -unification problem , given in standard form .unifiability of modulo is decidable in polynomial time ( wrt the size of ) . if the inferences of applied to lead to failure, then is not unifiable modulo ; so assume that this is not the case , and replace by an equivalent problem which is -reduced , deduced in polynomially many steps by proposition [ list - unifiable ] . by proposition [ list - result ] , the unifiability modulo of such a amounts to checking if the set of its element - equations is solvable .we are in the case where is uninterpreted , so to solve we apply the rules for standard unification , and check for their termination without failure ; this can be done in polynomial time .( in this case , is fully cancellative . 
)it can be seen that while termination of the above inference rules guarantees the _ existence _ of a unifier ( provided the element equations are syntactically solvable ) , the resulting -reduced system may not lead directly to a unifier .for instance , the -reduced system of list - equations is unifiable , with the following two incomparable unifiers : to get a complete set of unifiers we need three more inference rules , which are `` dont - know '' nondeterministic , to be applied only to -reduced systems : ( l8 ) _ nil - solution - branch for _ , at a -peak : ( l9 ) _ guess a non - nil branch for _ , at a -peak : & u ' = _ { } ^ ?bc(z , u ) , \ ; u = _ { } ^ ?h(v , x ) , \ ; u = _ { } ^ ?h(w , y ) \} \end{aligned } } { { \mathcal{eq}}~ \uplus ~ \ { u = _ { } ^ ?bc(v , x ) , \ ; u = _ { } ^ ?bc(w , y ) \}} ] .+ here stands for xor and is the initialization vector ( ) agreed upon between and .but then , some other agent , entitled to open a session with with initialization vector , can get hold of the first encrypted block ( namely : ) as well as the second encrypted block of what sent to , namely ; ( s)he can then send the following as a ` bona fide ' message to : \ , ] ] .it is clear now , that the intruder can get hold of the message intended to remain secret for him / her : by decrypting the second block of the ( encrypted part of the ) message received from , ( s)he first deduces : ; by xor - ing this with the first block of the message , ( s)he obtains : ; from which ( s)he can deduce by xor - ing with and , both of which are known to him / her ( the latter of these two terms is the first block of the message from to , that ( s)he has intercepted ) .[ ping ] the above attack ( which exploits the properties of xor : ) can be modeled as solving a certain -unification problem .we assume that the names , as well as the initialization vector , are constants accessible to .the message and the initialization vector , that and have agreed upon , are constants intended to be secret for .we shall interpret the function symbol of in terms of encryption with the public key of : i.e. , is .the protocol above can then be modeled as follows : we assume that the list of terms sends to , namely \,] ] ; ( s)he first recovers the namestamp of the sender , then checks that the second argument under in what ( s)he received is the agreed upon with ; subsequently ( s)he sends back the appropriate list of terms to , acknowledging receipt of the message .now , due to our cbc - assumption , the ground terms are both accessible to the intruder .so the attack by , mentioned above , corresponds to the fact that _ can _ send to the following list of terms : \,] ] , for the element - variable , i.e. , needs to solve the element - equation : ; since is interpreted here so that models , ( s)he can do so by setting : ; and that precisely leads to the attack .[ r:1 ] ( i ) the above analysis does _ not _ go through if the namestamp forms the _ second block _ of the encrypted part of the messages sent .in such a case , the protocol is ` leak - proof ' even under cbc , provided we assume that an iv for a message is a secret to be shared only by the sender and the intended recipient of the message , and that it is _ not _ transmitted as clear text or encrypted as an initial ` block number zero ' of the message body . 
actually , by reasoning as above ,one checks that the intruder in such a case can only get hold of , where is the ( secret ) iv that only and share .this in a sense is in accordance with , where the protocol was ` proved secure ' under such a specification .\(ii ) the considerations above lead us to conclude , implicitly , that in cryptographic protocols employing the cbc encryption mode , it is necessary to forbid free access to the ivs of the ` records ' of the ` messages ' sent , if information leak is to be avoided .this fact has been pointed out in the 90 s , by bellare et al ( ) , and again , in some detail , by k. g. paterson et al in ; both point out that tls 1.0 with its predictable ivs is inherently insecure . for more on this point , and on the relative advantages of tls 1.1 , tls 1.2 over tls 1.0 ,the reader can also consult , e.g. , http://www.educatedguesswork.org/2011/09/ ( note : keeping ivs as shared secrets alone may not always be sufficient in general , as is shown by example 2 above . )[ dbc ] in this section we extend the 2-sorted equational theory studied above , into one that fully models , in a simple manner and without using any ac - symbols , a ` generic ' block chaining encryption - decryption scheme . this theory , that we shall refer to as , is defined by the following set of ( 2-sorted ) equations : where is typed as and is typed as .all these equations can be oriented from left to right under a suitable reduction ordering , to form a convergent ( 2-sorted ) rewrite system .the equation says that is a left - inverse for ; it is actually an inductive consequence of the first five : i.e. , for any list - term and element - term both in ground normal form , reduces to under the first five , a fact that can be easily checked by structural induction , cf . _ appendix-2_. ( its insertion as an equational axiom is for technical reasons , as will be explained in _ remark [ r:4]_(ii ) below . ) a few words , by way of intended semantics in the context of cryptographic protocols , seem appropriate : would in such a context stand for the encryption with the public key of an intended recipient , of message , _ ` coupled ' in a sense to be defined , _ with as initialization vector ( iv ) ; and would be the decryption of with the private key of , to be then _` decoupled ' , again in a sense to be defined _ , with .if an agent wants to send a list of terms to recipient , ( s)he would send out where is the iv they have mutually agreed upon ; and would see it as the list of terms , from which ( s)he can retrieve the individual message terms by applying the last equation for in the system .this generic block chained encryption - decryption scheme is a natural abstraction of the usual ( xor - based ) cbc : it suffices to interpret the roles of and suitably , and define properly the meanings of ` coupling ' and ` decoupling ' , to get the usual cbc mode ; for that , one would _ define _ the ` coupling ' as well as ` decoupling ' of with as ; would then stand for , and would stand for , where is decryption with the private key of . if we go back to example [ ping ] based on the usual cbc , the encrypted part ofwhat sends out to ( with the notation employed there ) is the list of terms : ] . by applying the fifth equation in to this list of terms , under the assignments : ] ; i.e. 
, the list ] to ] and ] iff there is a directed path on from ] , at least one arc of which has label .we extend now the inference system of section [ inf - l ] by adding the following list - inferences ; these additional rules are essentially the -counterparts of the list - inferences of which only needed to consider .( there are several reasons why we have not worked with right from the start maybe the inference system would possibly have been more concise , if we had done so .a first reason is , that would have been at the expense of readability ; a second reason is that -unification is of interest on its own , especially for , as is shown by example [ ping ] above ; a third and conclusive reason is that the inference system we present below for -unification , actually reduces the problem to a problem of -unification . )we first formulate the `` dont - care '' nondeterministic inference rules .( db1.a ) _nil solution-1 for _ : : : ( db1.b ) _ nil solution-2 for _ : : : ( db1.c ) _ nil solution-3 for _ : : : { { \mathcal{eq}}~ \cup ~ \ { \ ; u = _ { } ^ ? nil , \; v = _ { } ^ ?nil \ ; \ } } { { \mathcal{eq}}~ \uplus ~ \ { \ ; u = _ { } ^ ?db(v , x ) \ ; \}}\ ] ] ( db2 ) _ left - cancellation on _ : : : { { \mathcal{eq}}~ \cup ~ \ { \ ; u = _ { } ^ ?db(v , y ) , \ ; x = _ { } ^ ? y \ ; \ } } { { \mathcal{eq}}~ \uplus ~ \ { \ ; u = _ { } ^ ?db(v , x ) , \ ; u = _ { } ^ ?db(v , y ) \ ; \}}\ ] ] ( db3.a ) _ push below , at a -peak _ : : : & u ' = _ { } ^ ?db(v ' , v ) , \ ; u ' = _ { } ^ ?db(w ' , w ) , \ ; u = _ { } ^ ?g(v , x ) , \ ; u = _ { } ^ ?g(w , y ) \ ; \ } \end{aligned } } { { \mathcal{eq}}~ \uplus ~ \ { \ ; u = _ { } ^ ?db(v , x ) , \ ; u = _ { } ^ ?db(w , y ) \ ; \}}\ ] ] + ( db3.b ) _ push and below at a -peak _ : : : & u ' = _ { } ^ ?bc(v ' , u ) , \ ; u ' = _ { } ^ ?db(w ' , w ) , \ ; u = _ { } ^ ?h(v , x ) , \ ; w = _ { } ^ ?h(u , y ) \ ; \ } \end{aligned } } { { \mathcal{eq}}~ \uplus ~ \{\ ; u = _ { } ^ ?bc(v , x ) , \ ; u = _ { } ^ ?db(w , y ) \;\}}\ ] ] + ( db4 ) _ splitting for at a -peak _ : : : ( db5 ) _ flip to conditionally : _ : : { { \mathcal{eq}}~ \cup ~ \ { v = _ { } ^ ?bc(u , x ) \ } } { { \mathcal{eq}}~ \uplus ~ \ { u = _ { } ^ ?db(v , x ) \ } } \ ] ] rules ( db3.a ) , ( db3.b ) , ( db4 ) and ( db5 ) have the lowest priority : they are to be applied in the `` laziest '' fashion .the rule ( db3.b ) ( `` _ push and below if _ '' ) is justified by the conditional left - cancellativity of ( cf .lemma f , _ appendix-2 _ ) .rule ( db5 ) is actually a ` narrowing ' step , justified by the fact that ` is a left - inverse ' for . 
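rule (db5) leans on the fact that db is a left inverse for bc; the identity can be checked concretely on the xor-based instantiation with the following python sketch (the byte-wise toy cipher e, its inverse d, the block size and the sample data are assumptions made only to have something executable, not part of the axioms).

```python
# minimal sketch of the chaining/de-chaining pair bc/db, instantiated with the
# usual xor-based cbc; the byte-wise toy cipher (e, d) and the block size are
# hypothetical stand-ins, chosen so that db(bc(L, z), z) = L can be executed.
BLOCK = 8
KEY = bytes(range(1, BLOCK + 1))       # assumed toy key

def xor(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

def e(block):                          # toy invertible "encryption"
    return xor(block, KEY)

def d(block):                          # its inverse ("decryption")
    return xor(block, KEY)

def bc(lst, z):
    # bc(nil, z) = nil ; bc(cons(x, L), z) = cons(e(x xor z), bc(L, e(x xor z)))
    if not lst:
        return []
    c = e(xor(lst[0], z))
    return [c] + bc(lst[1:], c)

def db(lst, z):
    # db(nil, z) = nil ; db(cons(y, M), z) = cons(d(y) xor z, db(M, y))
    if not lst:
        return []
    x = xor(d(lst[0]), z)
    return [x] + db(lst[1:], lst[0])

iv = b"\x07" * BLOCK
msg = [b"payload1", b"payload2", b"payload3"]
assert db(bc(msg, iv), iv) == msg      # db is a left inverse of bc
```

only the left-inverse direction db(bc(l, z), z) = l is postulated in the theory; the toy instantiation above also happens to satisfy the converse composition because its d is a two-sided inverse of e, but that is neither required nor derivable in the free theory.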
for the completeness of the procedure, we shall also need a few more list inference rules which are `` dont - know '' nondeterministic ; namely , the rules ( db6.a)(db8 ) below : ( db6.a ) _ guess a nil - solution - branch for at a -peak _ : : : ( db6.b ) _ guess a nil - solution - branch for and at a -peak _ : : : ( db7.a ) _ guess a narrowing step for at a -peak _ : : : { { \mathcal{eq}}~ \cup ~ \ { v = _ { } ^ ?bc(u , x ) , \ ; u = _ { } ^ ?db(w , y\}\ } } { { \mathcal{eq}}~ \uplus ~ \ { u = _ { } ^ ?db(v , x ) , \ ; u = _ { } ^ ?db(w , y\ } } \ ] ] ( db7.b ) _ guess a narrowing step for at a -peak _ : : : { { \mathcal{eq}}~ \cup ~ \ { u = _ { } ^ ?bc(v , x ) , \ ; w = _ { } ^ ?bc(v , y\}\ } } { { \mathcal{eq}}~ \uplus ~ \ { u = _ { } ^ ?bc(v , x ) , \ ; u = _ { } ^ ?db(w , y\ } } \ ] ] ( db8 ) _ standard unification on _ : : : we denote by the inference system that extends with the list - inference rules ( db1)(db8 ) , given above .it is important to note that the occur - check violation rule ( l6 ) is henceforth to be applied to -unification problems in standard form , under the partial relation _ as has been refined above_. [ list - inf - dbc-1 ] let be any -unification problem , given in standard form .the inference system terminates on in polynomially many steps .this is an extension of proposition [ list - unifiable ] , to the inference system .the proof of that earlier proposition can be carried over practically verbatim : we only have to show that the new inferences that might introduce fresh variables , namely the three rules ( db3.a ) , ( db3.b ) and ( db4 ) , can not lead to a non - terminating chain of inferences .to ensure this , a first observation is that the relation , which was used in the proof of proposition [ list - unifiable ] , _ has to be refined now _ so as to take _ also _ into account the relation , the symmetric closure of , as follows : * if then .* let and ; then implies .a second observation is that these three rules which might introduce fresh variables remove a -edge at some node , and introduce a new -edge at a node such that ; but the number of -equivalence classes remains the same , by the same argument as developed in the proof of proposition [ list - unifiable ] .the other details of that earlier proof carry over verbatim . given any -unification problem in standard form , let denote the inference procedure based on the rules of , given above for its list - equations ; we augment the procedure with any given complete procedure for solving the residual set of element - equations in the problem , when the list - inference rules of are no longer applicable .we have then the following result : [ completeness-2 ] the procedure is sound and complete for solving -unification problems given in standard form .the proof uses the same lines of reasoning as for proposition [ complete ] .the procedure is sound , because to any solution of a problem derived under any of its inferences , corresponds a solution for the initial problem .the completeness of is again proved , for any given problem , by induction on the maximum number of inference steps needed for the termination of the procedure on the problem ; and using case analysis when necessary , based on the `` dont - know '' inference rules ( db6.a)(db8 ) above , for such an analysis .we leave out the details , which are straightforward .[ list - inf - dbc-2 ] let be a -unification problem in standard form , to which none of the inferences of is applicable. 
then its subset of list - equations with non - nil variables on the left - hand side is in -solved form .this extends proposition [ d - solved ] to the inference system .note that we just need to show the following : from any given node ] : otherwise one of the inferences ( db2)(db8 ) would have been applicable ; there can be no directed -cycle either at ] on the graph of , and the only `` dont - care '' rule applicable is the splitting rule ( db4 ) ; we can use the equation for that splitting .after cancellation on and a variable elimination step , the problem derived is : which is in d - solved form , and gives a solution .\(i ) the following problem : is in standard form , but is not in a -solved form . rule ( db1.c ) is applicable , and gives the `` nil '' solution to and , with arbitrary .\(ii ) the following problem is in standard form : , but not in a d - solved form ; the only applicable inference rule is ( db5 ) ( _ flip to _ conditionally ) , and the problem becomes : this is a -unification problem which is l - reduced , but not in a d - solved form .none of the list - variables is in * nonnil * ; so , an obvious easy solution is , the element - variables being arbitrary ; this corresponds to applying rule ( l8 ) .we could also nondeterministically apply the rule ( l10 ) ( _ standard unification on _ ) ; to deduce then the most general solution solution , namely : .the following problem is in standard ( but not in a d - solved ) form : observe that but , so the rule ( db5 ) ( _ flip to conditionally _ ) is applicable to the equation on ; and that gives : the problem now presents a -peak at which is in , so rule ( l4.b ) can be applied , by writing ; this , followed by cancellation on , and a standard unification step on , leads us to deduce : , and subsequently ; the problem is thus transformed ( after some variable elimination steps ) into : the rule ( db5 ) ( _ flip to conditionally _ ) is again applicable , now to the equation on ; we thus get : the rule ( l4.a ) ( _ semi - cancellation on at a -peak _ ) is now applicable , and we deduce : ; after variable elimination , the problem transforms to : which presents a -peak on , so the splitting rule ( l5 ) is applicable ; we write , and the problem evolves ( after variable elimination ) to : , the list - equations , as well as the element - equations , are now in -solved form ; and they do give a solution to the problem we started with ( as can be easily checked ) .we first addressed the unification problem modulo a convergent 2-sorted rewrite system , that models , in particular , the ( usual , xor - based ) cbc encryption mode of cryptography , by interpreting suitably the function in . a procedure is given for deciding unification modulo , which has been shown to be sound and complete ( and finitary ) when is either uninterpreted , or interpreted in such a manner . in the uninterpreted case , the procedure is a combination of the inference procedure presented in this paper , with syntactic unification ; it turns out to be of polynomial complexity , essentially for this reason . 
in the case where is interpreted as mentioned above , the unification procedure is a combination of with any complete procedure for deciding unification modulo the associative - commutative theory for xor ; and it turns out to be np - complete for this reason .the second part of the work extends into a theory that models , at an abstract level , a cipher - decipher block chaining scheme .unifiability modulo is shown to be decidable by an inference procedure , which essentially ` reduces ' any -unification problem in fine into one over .unification modulo is also ( finitary and ) np - complete . a point that seems worth mentioning here concerns the binary function symbol in .we have implicitly assumed that in practical situations ( such as in example 2 above ) the two arguments of are ` accessible ' ; this can be made more explicit by adding two ` projection ' equations to , using and on , to get the following set of equations : with typed as , and as .all these equations can be oriented left - to - right under a suitable simplification ordering , and the resulting rewrite system remains convergent .it is not difficult to check that , even after the addition of these two projection rules , unification problems with some very minor restrictions on the form of equations involving and can still be assumed in a standard form , and solved by the inference procedure given above . in other words , the results of section [ dbc ] remain valid for this enlarged 2-sorted convergent rewrite system that we shall again refer to as , since no confusion seems likely .the rewrite system thus enlarged can actually been shown to be -strong in the sense of , under a suitable precedence based ( lpo- or rpo- like ) simplification ordering , by taking to be the subsystem formed of the two rules ( 6.1 ) and ( 6.2 ) .it would then follow from proposition 11 of , that the so - called ` passive deduction ' problem , for an intruder , is decidable , if the intruder capabilities are modeled by this theory .this would yield , to our knowledge , the first purely rewrite / unification based approach for analyzing cryptographic protocols employing the cbc encryption mode .the details will be given elsewhere , where we also hope to present decision procedures for a couple of other security problems , where an intruder eavesdrops or guesses some low - entropy data in the context of block ciphers .finally , observe that unification modulo equational theories often serves as an auxiliary procedure in several formal protocol analysis tools , such as maude - npa , cl - atse , , for handling algebraic properties of cryptoprimitives .the work we have presented in this paper could be of use in these tools , as a first step towards the automation of attack detection in cryptographic protocols employing cbc .alpha m. abadi , v. cortier .`` deciding knowledge in security protocols under equational theories '' . 367(1 - 2):232 , 2006 .s. anantharaman , c. bouchard , p. narendran , m. rusinowitch .`` unification modulo chaining '' . in _ proc .of 6th int .conference on language and automata theory and applications - lata 2012 _ , lncs 7183 , pp .7082 , springer - verlag , 2012 .s. anantharaman , p. narendran , m. rusinowitch .`` intruders with caps '' . in _ proc . of the int .conference rta07 _ , lncs 4533 , pp .2035 , springer - verlag , 2007 .s. anantharaman , h. lin , c. lynch , p. narendran , m. rusinowitch .`` unification modulo homomorphic encryption '' .48(2):135158 ( 2012 ) f. baader , w. snyder .`` unification theory '' . 
in _handbook of automated reasoning _ , pp .440526 , elsevier sc .publishers b.v . , 2001 .m. bellare , r. gurin , p. rogaway .`` xor macs : new methods for message authentication using finite pseudorandom function '' in _ proc . of the intcrypt0 95 , lncs 963 , pp . 1528 , springer - verlag , 1995 m. baudet .`` deciding security of protocols against off - line guessing attacks '' . in _ proc . of the acm conf .on computer and comm . security _ , ccs05 , pp . 1625 , 2005 .h. comon - lundh , r. treinen . `` easy intruder deductions . ''verification : theory and practice , essays dedicated to zohar manna on the occasion of his birthday ( n. dershowitz , ed . ) . in _lncs _ 2772 , pp . 225242 , springer - verlag , 2003 .h. comon - lundh , v. shmatikov .`` intruder deductions , constraint solving and insecurity decision in presence of exclusive - or . '' in _ proc . of the logic in computer science conference , lics03 , _ pp .271280 , 2003 .n. dershowitz .`` termination of rewriting . ''3(1/2 ) : 69116 ( 1987 ) .d. dolev , s. even , r. karp , `` on the security of ping - pong protocols '' .55:57 - 68 ( 1982 ) .q. guo , p. narendran , d.a .`` unification and matching modulo nilpotence . '' in _ proc . of the 13th int .conf . on automated deduction , _( cade-13 ) , lncs 1104 , pp .261274 , springer , 1996 .`` canonical forms and unification . '' in _ proc . of the 5th int . conf . on automated deduction , _( cade-5 ) , lncs 87 , pp .318334 , springer , july 1980 .jouannaud , and c. kirchner .`` solving equations in abstract algebras : a rule - based survey of unification . '' in _ computational logic : essays in honor of alan robinson , _ 360394 , mit press , boston , 1991 .p. c. kanellakis , and p. z. revesz .`` on the relationship of congruence closure and unification . '' 7 : 427 - 444 ( 1989 ) . c. lynch , z. liu , `` efficient general unification for xor with homomorphism . '' in emproc . of the 23rd int .conference on automated seduction , ( cade-23 ) , lncs 6803 , pp .407421 , springer - verlag , 2011 . c. lynch , b. morawska , `` basic syntactic mutation . '' in em proc . of the 18th int .conference on automated deduction , ( cade-18 ) , lnai 2392 , pp .471485 , springer - verlag , 2002 .j. millen , h .-`` narrowing terminates for encryption . '' in _ proc .of the ninth ieee computer security foundations workshop ( csfw ) _ , pp .3944 , 1996 .k. g. paterson , t. ristenpart , t. shrimpton .`` tag size _ does _ matter : attacks and proofs for the tls record protocol '' in _ proc . of int .asiacrypt 2011 , lncs 2073 , pp .372389 , springer - verlag , 2011 .t. j. schaefer .`` the complexity of satisfiability problems . '' in _ proc . of the 10th annual acm symposium on theory of computing _ , pp .216226 , 1978 .* lemma a*. for all terms , we have : if and only if . the proof is by structural induction on the terms , based on the semi - cancellativity of and the cancellativity of . if either or is , then the other has to be too , and the assertion of the lemma is trivial .so suppose that and are not . then and , for some terms . substituting back into the original equation and applying the second axiom of , we deduce that : since is cancellative , we get : , and . from the semi - cancellativity of , we then deduce that : , and .therefore , by structural induction , we deduce that , and the result follows .* lemma b*. for all terms , we have : if and only if or . 
the proof is by exactly the same reasonings as for proving the previous lemma .we shall paraphrase these two lemmas together by saying that is `` conditionally '' semi - cancellative . * lemma c*. for all terms : if + then and . by applying the second axiom of , we get : cancellation on gives : and by lemma a above , this implies that . in what follows , by we shall mean the equational theory of section [ dbc ] , and the rewrite system it defines .as for the analogs of the above results for the operator of , we first observe that the function is not semi - cancellative more precisely , it is not right - cancellative : indeed , we have , although , in general .but left - cancellativity holds for . * lemma d*. [ dbc - left - can ] if then .we can assume wlog that the terms , , and are in normal form .if , then both and must be redexes , or , in other words , for some .since is semi - cancellative this leads to a contradiction . * corollary e*. [ dbc - g - neq ] if , and , then .so , the analog of lemma a for does not hold in general .however , is ` conditionally ' left - cancellative : * lemma f*. for all terms , we have : if and only if or .we just need to prove the `` only if '' assertion . if is not , then for some . applying the last axiom of , we get : .the assertion follows then from the cancellativity of and the left - cancellativity of .* lemma g*.[db - leftinversefor - bc ] let be the convergent rewrite system formed of the first five rules in the system of section [ dbc ] . for any list - term and element - term in -normal form , we have : .the proof is by structural induction on .the base case when is is trivial ; so suppose for some element - term , and list - term .substituting for and using first the equational axiom of , the left - hand side of the assertion becomes :
we investigate unification problems related to the cipher block chaining ( cbc ) mode of encryption . we first model chaining in terms of a simple , convergent , rewrite system over a signature with two disjoint sorts : _ list _ and _ element . _ by interpreting a particular symbol of this signature suitably , the rewrite system can model several practical situations of interest . an inference procedure is presented for deciding the unification problem modulo this rewrite system . the procedure is modular in the following sense : any given problem is handled by a system of ` list - inferences ' , and the set of equations thus derived between the element - terms of the problem is then handed over to any ( ` black - box ' ) procedure which is complete for solving these element - equations . an example of application of this unification procedure is given , as attack detection on a needham - schroeder like protocol , employing the cbc encryption mode based on the associative - commutative ( ac ) operator xor . the 2-sorted convergent rewrite system is then extended into one that fully captures a block chaining encryption - decryption mode at an abstract level , using no ac - symbols ; and unification modulo this extended system is also shown to be decidable .
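as a concrete companion to example [ping] of the article above, the following python sketch replays the xor splicing that drives the attack; it is only a rough model: the byte-wise toy cipher, the block layout (namestamp first, secret second) and the assumption that the acknowledgment leaks the decrypted second block back to the intruder are illustrative simplifications, not a faithful transcription of the protocol.

```python
# rough sketch of the cbc block-splicing property exploited in example [ping];
# the toy cipher, block size and message layout are illustrative assumptions.
BLOCK = 8
KEY = bytes(range(11, 11 + BLOCK))     # stands in for B's (private) key

def xor(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

def e(b):  return xor(b, KEY)          # toy encryption under B's key
def d(b):  return xor(b, KEY)          # toy decryption under B's key

def cbc_enc(blocks, iv):
    out, prev = [], iv
    for m in blocks:
        prev = e(xor(m, prev))
        out.append(prev)
    return out

def cbc_dec(blocks, iv):
    out, prev = [], iv
    for c in blocks:
        out.append(xor(d(c), prev))
        prev = c
    return out

# honest session A -> B: blocks [namestamp, secret], iv agreed between A and B
iv_ab  = b"\x13" * BLOCK
name_a = b"agent-A\x00"
secret = b"topsecrt"
c1, c2 = cbc_enc([name_a, secret], iv_ab)

# the intruder I has intercepted c1 and c2, opens its own session with B
# (iv_ib agreed between I and B), and splices c2 behind its own first block.
name_i = b"agent-I\x00"
iv_ib  = b"\x2a" * BLOCK
c1p    = e(xor(name_i, iv_ib))         # intruder's own, well-formed first block
forged = [c1p, c2]

plain = cbc_dec(forged, iv_ib)         # what B recovers from the forged message
assert plain[0] == name_i              # looks like a bona fide message from I
# plain[1] = d(c2) xor c1p = secret xor c1 xor c1p ; assuming the acknowledgment
# leaks plain[1] back to the intruder, xor-ing away the known c1 and c1p
# exposes A's secret block:
assert xor(xor(plain[1], c1), c1p) == secret
```

the point is the algebraic one made in the article: because cbc masks each plaintext block with the previous ciphertext block by xor, splicing an intercepted block into a session whose chaining values the intruder controls reduces secrecy to solving a single xor equation.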
the pound - drever - hall technique is commonly used to frequency stabilize lasers to optical cavity resonances .it was originally developed by pound for the frequency stabilization of microwave oscillators , and adapted to the optical domain by drever and hall and others. in brief , the source to be stabilized is frequency modulated . a diode detector ( a photodiode in the optical domain ) detects the reflection of the modulated source from a cavity .if the source is slightly detuned from a resonance , the diode detector signal will contain a component at the modulation frequency .when the source is on resonance , no component is observed at the modulation frequency . by mixing the diode signal with the modulation source ,we can obtain a suitable error signal for feedback control of the oscillator frequency ( zero on resonance , positive on one side and negative on the other ) .high bandwidth and a large capture range have made this technique popular for laser frequency stabilization in research laboratories .the technique is now rarely used for the stabilization of lower - frequency ( microwave ) oscillators , where a variety of alternative techniques exist .black has written a pedagogical article on the basic theory of the pound - drever - hall technique and an undergraduate experiment has been developed by boyd _et al._ to demonstrate laser frequency stabilization using the technique .a detailed guide to its implementation in a research context is available in ref . .the availability of inexpensive modular radio - frequency ( rf ) components has allowed us to develop a senior undergraduate experiment which is similar in spirit to the optical implementation of pound - drever - hall , but which uses rf electronics rather than optical equipment .the three main pieces of equipment are a commercial voltage controlled oscillator , a resonating cavity , and an integrating control circuit .the essence of the pound - drever - hall technique is the phase change in the cavity reflection coefficient as the frequency passes through a resonance .with rf electronics it is straightforward to directly observe this phase shift using an unmodulated source .this observation is the basis of the interferometric cavity locking techniques sometimes applied in the microwave regime. by directly observing the phase shift ( which is difficult in the optical domain due to the short wavelengths ) , the basis of the pound - drever - hall technique is reinforced .radio frequency electronics also provide a systematic way to vary the extent of source modulation and the cavity coupling .the experiment begins by observing the relation between the input voltage and the output frequency , which is known as the tuning curve of the voltage controlled oscillator .the cavity resonance is then observed by scanning the frequency of the voltage controlled oscillator and measuring the power reflected from the cavity .different coupling conditions can also be tested at this time .the real and imaginary parts of the reflection coefficient are investigated by mixing the reflected signal with a phase - shifted portion of the original signal to create a dispersion - like error signal which can be used to frequency stabilize the voltage controlled oscillator .the modulation properties of the voltage controlled oscillator are then investigated , and the relation between the modulation voltage , the tuning curve , and the strength of the frequency sidebands is confirmed . 
once the modulation is understood quantitatively ,the pound - drever - hall technique is implemented , and plots of the error signal as a function of the detuning of the oscillator from the cavity resonance are obtained . in the final step the voltagecontrolled oscillator is frequency stabilized using the pound - drever - hall error signal .locking can be verified by changing the temperature of the cavity and recording the stabilized frequency change using a frequency counter .the relation between the frequency and temperature can be used to determine the linear thermal expansion coefficient of copper . by changing the inner conductor it is also possible to measure the expansion coefficients of aluminum and super invar . in the following sections we explain these aspects of this experiment in more detail .the resonating cavity shown in fig .[ fig : cavity ] consists of a length coaxial transmission - line of characteristic impedance , with one ended shorted and the other open - circuited .although we refer to it as a `` cavity , '' the current node end is left open allowing both visual inspection and the inner cylinder to be easily changed .the length of the inner cylinder is one quarter of the desired resonant wavelength corresponding to .the resonant frequency is dictated by the availability of a suitable voltage controlled oscillator and cavity dimensions which are convenient for handling and inspection by students .the coaxial cavity type was chosen because it is similar to familiar resonating systems , such as transverse waves on strings .this configuration also allows us to observe the thermal expansion of the inner conductor because the resonant frequency is primarily determined by the length of the inner cylinder .the inner cylinder can also be changed to observe the thermal expansion of different materials .two holes for coupling loops are drilled in the cavity lid midway between the inner cylinder and the edge of the outer cylinder .the loops consist of 26 awg copper wire attached to sma connectors by soldering one end to the center pin and the other to ground .the sma connectors are inserted into brass cartridges which fit into the holes in the lid ( voltage node ) .the cartridges are labeled so that the angle of rotation can be read .the two loops are approximately and in area .the larger loop is used to couple power into the cavity , and the smaller loop is used to detect power from the cavity .the size of the large loop is dictated by the requirement that under , critical , and over coupling be observable by rotating the loop cartridge .the other loop is small to reduce its impact on the quality factor .the unloaded quality factor of this cavity , , is small compared to literature values [ ref . ,( 70 ) gives .we have constructed a similar cavity for research purposes with a single threaded hole for an sma - based coupling loop and have verified that the discrepancy is primarily due to the brass cartridges .however , the large , easily adjustable coupling loops , and the relatively small are advantageous for this experiment .the experiment is based on a voltage controlled oscillator ( minicircuits , zx95 - 850-s+ ) which has a central frequency which approximately matches the resonant frequency of the cavity ( ) and a modulation bandwidth much greater than the pound - drever - hall modulation frequency ( ) . 
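two back-of-the-envelope relations are useful when reproducing this setup: a shorted quarter-wave air line resonates near c divided by four times the inner-conductor length, and near its centre the voltage controlled oscillator follows an approximately linear tuning curve. the sketch below evaluates both; the length, centre frequency and tuning slope are illustrative guesses, not the measured values of the actual apparatus.

```python
# back-of-the-envelope sketch relating the resonator geometry and the vco
# tuning curve; the numbers below are illustrative assumptions only.
c = 299792458.0                      # speed of light, m/s

length = 0.083                       # assumed inner-conductor length, m
f_res = c / (4 * length)             # quarter-wave resonance of an air line
print(f"estimated resonance: {f_res/1e6:.0f} MHz")

f_centre = 850e6                     # nominal vco centre frequency (assumed)
k_v = 25e6                           # assumed tuning sensitivity, Hz per volt
def vco_frequency(v_tune):
    # linearised tuning curve f(V) ~ f_centre + k_v * V, valid near the centre
    return f_centre + k_v * v_tune

print(f"vco at 2 V tune: {vco_frequency(2.0)/1e6:.0f} MHz")
```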
to observe the cavity resonance , a ramp functionis applied to the tuning port of the voltage controlled oscillator while the output is connected to the large coupling loop through an isolator and then circulator . the signal that reflects from the cavity exits the circulator andis amplified before entering a detector diode followed by a 5k resistor in parallel to ground .the diode voltage is observed using an oscilloscope .the detector diode voltage to power relation was measured and is provided to the students .this relation depends on the load that the diode is driving .a 5k parallel load resistor is used to ensure that the load is consistent between different oscilloscopes .the students are asked to explore over , under , and critical coupling by varying the angle of the input coupling loops , with critical coupling being characterized as having the smallest reflected power on resonance .once critical coupling is found and the loops are secured in this position , the reflected signal is analyzed to find the loaded quality factor , , of the cavity ( see fig .[ fig : qdetermine ] ) . to determine this value , a model for the reflection coefficient of the cavity must be determined .the reflection coefficient of a one - port is defined as , where and are phasors representing the incident and reflected traveling wave amplitudes at the location of the one - port .( we use to signify phasor quantities . )if an impedance is driven through a transmission line of characteristic impedance , the reflection coefficient can be calculated to be is determined by fitting a lorentzian to the reflected power [ see eq . ] .a linearly varying incident power has been included in the fit to accommodate for the frequency dependent losses of components other than the cavity.,width=302 ] a cavity coupled to a transmission line can be modeled as a lumped element resonant circuit of total impedance in the vicinity of a resonance , allowing its reflection coefficient to be calculated using eq . .although the equivalence of the lumped circuit model can be established under quite general conditions, a heuristic motivation specific to our situation will be given here .near resonance , the voltage node end of a coaxial resonator behaves like a series lcr resonant circuit a large amount of current flows for a small oscillating voltage applied between the inner and outer conductors .the input coupling loop interacts primarily with the oscillating magnetic field at this end , so we model the coupling using a non - ideal transformer , as shown in fig .[ fig : impedance ] .the secondary of the transformer is assumed to be part of the lcr resonator .( we ignore the second smaller loop in our cavity and assume that its contribution to cavity loss can be incorporated into . ) if we use the phasor relations for the transformer ( see fig .[ fig : impedance ] ) where is the coupling coefficient , and we find that is given by for a series lcr resonator , where is the angular frequency at resonance .we define , assume , and simplify the expression for as equation can be interpreted as equivalent to the impedance of a parallel lcr circuit near resonance , with a resistance of in series with an inductor .a coupling coefficient may be defined as so that by using eqs . 
and , may written as where is a phase factor of unit magnitude , and .this frequency shift due to coupling is small , and we will assume that .we estimate the impedance of to have a magnitude of at , which is comparable to ( ) .hence , contributes a significant phase to the overall reflection coefficient .this additional phase can be compensated for by introducing the appropriate phase change by an adjustable delay line . to simplify the following discussion of phase, we define a phase - shifted reflection coefficient .when looking at the reflected power we are interested in ^ 2}. \label{gamma2}\ ] ] for critical coupling , and at resonance . under couplingcorresponds to and over coupling to . if we define the loaded quality factor , where is the full - width half - maximum of the resonance , we find from eq .that .thus critical coupling ( ) is a particularly convenient configuration for the determination of , and it is straightforward to experimentally identify ( at resonance ; see fig . [fig : qdetermine ] ) .the rest of the experiment is done with critical coupling to simplify the derivations .the expression for the reflection coefficient is analogous to the optical case, provided that the optical cavity finesse is sufficiently high .the pound - drever - hall technique is sensitive to how the real and imaginary parts of the reflection coefficient vary with frequency near resonance . in particular , it is significant that the imaginary part of is anti - symmetric about the resonance , and falls to zero away from the resonance .in contrast , the real part of is symmetric about the resonance , and approaches away from resonance .students can observe the imaginary and real parts of the reflection coefficient by mixing the reflected signal with a phase - shifted version of the incident signal ( the reference ) .the technique is illustrated in fig .[ fig : part_c_setup ] . an adjustable coaxial air delay line ( general radio co. 874-la ) is used to set the relative phase between the reference and reflected signals .( the delay line can be replaced with a phase shifter if the instructions are modified to accommodate the phase shifter s mechanism for changing the phase . ) with the loop detached from the cavity , the reference phase for the detection of can be set by adjusting the length to produce the largest negative dc output signal from the mixer . when the cavity is reattached, the mixer output will indicate , as shown in fig .[ fig : parts ] .when the length of the adjustable delay line is increased or decreased by , the mixer output will indicate or respectively , which is also shown in fig .[ fig : parts ] .the dispersion - like signal for is suitable as an error signal to frequency stabilize the voltage controlled oscillator ( sometimes known as interferometric locking). there are some discrepancies between the theoretical and observed reflection coefficients in fig .[ fig : parts ] .the slight asymmetries are partially due to imperfect adjustment of the delay line . 
in addition, the asymptotic behavior of is influenced by the fact that the delay line is not a perfect , frequency - independent phase shifter .the phase shift variation with frequency can be calculated , and improves the agreement between the theory and observations , as shown in fig .[ fig : parts ] .although the theoretical reflection coefficient is calculated assuming that , we found that eliminating this assumption does not significantly improve agreement .we note that due to the nature of the cavity design , the cartridges can rotate slightly or loosen while the setup is changed between measuring the resonance and the mixed signals , which causes the quality factor and/or the coupling constant to change .( we assume that the coupling constant . ) in another design we tapped a hole directly into the lid of the cylinder so that there are no cartridges involved and the angles of the loops are fixed. this configuration might be more desirable , because it provides more reliable parameters for the theoretical calculation . .( a ) transmission through the cavity .( b ) observation of the real part of the cavity reflection coefficient ( to within a positive scale factor ) .( c ) observation of the imaginary part of the reflection coefficient ( to within a positive scale factor ) . the curve labeled `` delay line effect '' is a calculation accounting for the variation in phase shift of the delay line with frequency .the calculations are vertically scaled for the best least squares fits.,width=302 ]frequency modulation of the source to be stabilized ( the voltage controlled oscillator in this case ) is fundamental to the pound - drever - hall technique .when a time - dependent voltage is applied to the tuning port of the voltage controlled oscillator , we expect frequency modulation if is within the voltage controlled oscillator s modulation bandwidth .we approximate the tuning curve of the voltage controlled oscillator by , and write the time - dependence of the frequency as , where . because the phase is the time integral of the angular frequency , the output of the voltage controlled oscillator can be written in the phasor form : where ] , and ^ 2p_0 ] and ^ 2p_0 ] is antisymmetric about the resonant frequency. therefore it can be used as an error signal in a feedback loop to control the voltage controlled oscillator frequency .its sign indicates whether the voltage controlled oscillator frequency should be lowered or raised to keep it matched with the cavity resonance . in the diode outputthis desired error signal is modulated by , so it must be converted to dc and isolated from the rest of the terms in eq . .this function can be performed by mixing the output of the diode with a reference signal and subsequent filtering .the reference can be obtained by splitting off a fraction of the voltage controlled oscillator modulation source output and applying an appropriate phase shift . in optical implementations of pound - drever - hall method ,the reflected power from an optical cavity is detected by a photodiode , and eq .is an expression for the photocurrent . in thisall rf method we use a schottky diode detector ( pasternack pe8000 - 50 ) for the same purpose ( see fig . [fig : part_g_setup ] ) . to verify that the pound - drever - hall method provides a suitable error signal students scan the voltage controlled oscillator frequency by applying a ramp to its tuning port , with rf modulation added through a bias t. 
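the expected shape of the demodulated signal can be previewed numerically before the measurement. the following numpy sketch evaluates the standard pound-drever-hall combination of carrier and sideband reflection coefficients for a critically coupled resonance; the loaded quality factor, resonance frequency and modulation frequency used below are round, assumed numbers, and the overall sign depends on the phase convention chosen for the reference arm, so the curve is indicative only.

```python
# numerical sketch of the pound-drever-hall error signal for a critically
# coupled resonance; q_l, f0 and the modulation frequency are assumed values.
import numpy as np

f0  = 905e6           # assumed resonance frequency, Hz
q_l = 200.0           # assumed loaded quality factor
f_m = 25e6            # assumed modulation frequency, Hz

def gamma(f):
    # reflection coefficient of a critically coupled one-port near resonance
    x = 2 * q_l * (f - f0) / f0
    return -1j * x / (1 + 1j * x)

f = np.linspace(f0 - 80e6, f0 + 80e6, 2001)
# standard pdh combination: the demodulated, low-pass-filtered mixer output is
# proportional to im[gamma(f)*conj(gamma(f+f_m)) - conj(gamma(f))*gamma(f-f_m)]
error = np.imag(gamma(f) * np.conj(gamma(f + f_m))
                - np.conj(gamma(f)) * gamma(f - f_m))

# the signal crosses zero at f0 (and again near f0 +/- f_m), with a steep
# central slope, which is what makes it usable as a feedback error signal.
print(error[np.argmin(np.abs(f - f0))])   # ~ 0 on resonance
```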
a full scan , shown in fig .[ fig : pdherror ] , shows the characteristic features of the pound - drever - hall error signal. we can compare the observations to theory : 7000 .all items were bought new , with the exception of the frequency counter and adjustable delay line ( both of these were obtained from used test - equipment dealers ) .many of the components employed are generic , and may be available in a standard undergraduate physics laboratory ( frequency counter , oscilloscope , and function generator ) .fabrication of the resonator was straightforward , and requires access to a lathe , milling machine , and a drill press .the outer cylinder was cut from a tube of the required size to minimize the required machining .the brass cartridges were manufactured using a computer numerical control ( cnc ) mill ; it is possible to create these using a conventional milling machine and a lathe if a cnc mill is not available .the super invar inner cylinder was silver plated by a local shop for $ 250 .to date , this experiment has been performed by six groups of undergraduates at the university of waterloo . to completethe entire experiment typically takes two sessions of approximately four hours each . in an abbreviated single sessionthe voltage controlled oscillator can be locked to the cavity using the interferometric technique , and thermal expansion measured using this lock .we omit an investigation of voltage controlled oscillator modulation and the pound - drever - hall error signal .the pound - drever - hall technique is primarily confined to use in laser physics .a broader appeal of the experiment is that students gain familiarity with using modular rf components such as mixers and splitters . to assist the students with minimal direct involvement we have developed enhanced web - based apparatus diagrams, which students consult when doing the experiment . as a cursoris moved over the components in a diagram such as fig .[ fig : part_g_setup ] , a photograph of the physical device appears , together with the manufacturer s part number and links to additional information .although designed for undergraduates , this experiment is also useful for new graduate students and researchers who are interested in learning about pound - drever - hall locking and locking to optical cavities in general .for example , interferometric observation of the reflection phase shift ( see fig .[ fig : parts ] ) provides insight into the hnsch - couillard locking technique. we gratefully acknowledge the assistance of zhenwen wang , j. szubra , and h. haile of the university of waterloo science technical services .we thank c. bennett , j. carter , s. de young , and a. lupascu for comments on the manuscript .this work was supported by the natural sciences and engineering research council of canada .r. w. p. drever , j. l. hall , f. v. kowalski , j. hough , g. m. ford , a. j. munley , and h. ward , `` laser phase and frequency stabilization using an optical resonator , '' _ appl .b _ , vol . 31 , pp . 97105 , 1983 .r. w. fox , c. w. oates , and l. hollberg , `` stabilizing diode lasers to high finesse cavities , '' in _ experimental methods in the physical sciences ; cavity - enhanced spectroscopies _ ,40 , p. 46, 2001 .e. ivanov , m. tobar , and r. woode , `` applications of interferometric signal processing to phase - noise reduction in microwave oscillators , '' _ ieee trans .microwave theory tech ._ , vol .46 , pp .1537 1545 , 1998 . c. e. liekhus - schmaltz , r. mantifel , m. torabifard , i. b. 
burgess , and j. d. d. martin , `` injection - locked diode laser current modulation for pound - drever - hall frequency stabilization using transfer cavities . ''http://arxiv.org/abs/1109.0338[arxiv/1109.0338 ] .r. beringer , `` resonant cavities as microwave circuit elements , '' in _ principles of microwave circuits _ ( c. g. montgomery , r. h. dicke , and e. m. purcell , eds . ) , vol . 8 of _ mit radiation laboratory series _ , pp . 207282 , mcgraw - hill , 1948 .`` university of waterloo , phys 360/460 experiment # 10 : radio - frequency electronics and frequency stabilization . ''http://science.uwaterloo.ca/~jddmarti / teaching / phys360_460/rf_exp / rf_% exp.html[http://science.uwaterloo.ca/~jddmarti / teaching / phys360_460/rf_exp / rf_% exp.html ] .
we have developed a senior undergraduate experiment that illustrates frequency stabilization techniques using radio - frequency electronics . the primary objective is to frequency stabilize a voltage controlled oscillator to a cavity resonance at using the pound - drever - hall method . this technique is commonly applied to stabilize lasers at optical frequencies . by using only radio - frequency equipment it is possible to systematically study aspects of the technique more thoroughly , inexpensively , and free from eye hazards . students also learn about modular radio - frequency electronics and basic feedback control loops . by varying the temperature of the resonator , students can determine the thermal expansion coefficients of copper , aluminum , and super invar .
let be a hilbert space with the inner product and the norm . in this paper , we study the following ultraparabolic equation associated with the initial conditions where is a positive - definite , self - adjoint operator with compact inverse on and are known smooth functions satisfying for compatibility at and is a nonlinear source function satisfying some conditions which will be fully presented in the next section .the problem ( [ eq:1])-([eq:2 ] ) involving multi - dimensional time variables is called the initial - boundary value problem for ultraparabolic equation .the ultraparabolic equation has many applications in mathematical finance ( e.g. ) , physics ( such as multi - parameter brownian motion ) and biological model . among many applications ,the equation ( [ eq:1 ] ) arises as a mathematical model of population dynamics , for instance , the dynamics of the age structure of an isolated at the distinct moments of astronomical or biological time and in this application plays a role as the number of individuals of age in the population at time .the study of ultraparabolic equation for population dynamics can be found in some papers such as . in particular , kozhanov studied the existence and uniqueness of regular solutions and its properties for an ultraparabolic model equation in the form of where is laplace operator , is a nonlocal linear operator . in the same work ,the authors deng and hallam in considered the age structured population problem formed associated with non - locally integro - type initial - bounded conditions .the ultraparabolic equation is also studied in many other aspects . in the phase of inverse problems , lorenzi studied the well - posedness of a class of an inverse problem for ultraparabolic partial integrodifferential equations of volterra type .very recently , zouyed and rebbani proposed the modified quasi - boundary value method to regularize the equation ( [ eq:1 ] ) in homogeneous backward case in a class of ill - posed problems . for another studies regarding the properties of solutions of abstract ultraparabolic equations, we can find many papers and some of them are refered to . even though the numerical method for such a problem is studied long time ago , it is still very limited .we only find some papers , such as .the authors akrivis , crouzeix and thome investigated a backward euler scheme and second - order box - type finite difference procedure to numerically approximate the solution to the dirichlet problem for the ultraparabolic equation ( [ eq:1])-([eq:2 ] ) in two different time intervals with the laplace operator and source function .recently , ashyralyev and yilmaz constructed the first and second order difference schemes to approximate the problem ( [ eq:1])-([eq:2 ] ) for strongly positive operator and obtained some fundamental stability results . on the other hand , marcozzi et al . developed an adaptive method - of - lines extrapolation discontinuous galerkin method for an ultraparabolic model equation given by with a certain application to the price of an asian call option .however , we can see that most of papers for numerical methods aim to study linear cases .equivalently , numerical methods for nonlinear equations are investigated rarely .therefore , in this paper we shall study the model problem ( [ eq:1])-([eq:2 ] ) in the numerical angle for the smooth solution . 
from the idea of finite difference scheme and conveying a fundamental result in operator theory , we construct an approximate solution for the nonhomogeneous equation in terms of fourier series . combining the same technique and linear approximation ,the approximate solution for the nonlinear case is established .the rest of the paper is organized as follows . in section 2, we shall consider the linear nonhomogeneous problem ( [ eq:1])-([eq:2 ] ) under a result of presentation of discretization solution in multi - dimensional problem .the nonlinear problem is considered in section 3 and an iterative scheme is showed .finally , four numerical examples are implemented in section 4 to verify the effect of the method .in this section , we shall introduce the suitable discrete operator used in the time discretization . in order to define the discrete operator involved in the equation in the problem ( [ eq:1])-([eq:2 ] ), we consider the multi - dimensional problem given by associated with initial conditions for ,1\le i\le d ] , then one has for all . we shall prove ( [ eq:12 ] ) by induction .we can see that it holds for . for , we have thus , ( [ eq:12 ] ) holds for .now , we assume that ( [ eq:12 ] ) holds for .it means that we shall prove that it aslo holds for .indeed , we get therefore , ( [ eq:12 ] ) holds for . by induction, we completely finish the proof . from ( [ eq:12 ] ), we shall obtain the discrete solution by fourier series .it should be stated that the discrete solution is , in multi - dimensional , involved by many situations according to the set , more exactly situations .as introduced , in this paper we aim to consider the ultraparabolic problem with two time dimension since there are many studies on this problem in real application .hence , the solution of the discrete problem of ( [ eq:1])-([eq:2 ] ) in linear nonhomogeneous case is and its explicit form is given as follows .if , we replace by to get similarly , for we replace by to obtain from ( [ eq:13])-([eq:14 ] ) , we conclude the discrete solution for the two - time - variable ultraparabolic ( [ eq:1])-([eq:2 ] ) in linear nonhomogeneous case is furthermore , we obtain a stability result in the following theorem .let in ( [ eq:15 ] ) be the discrete solution of the problem ( [ eq:1])-([eq:2 ] ) in linear nonhomogeneous case , then there exists a positive constant independent of and such that since and , by using parseval s identity we have for .similarly , for we also have combining ( [ eq:16 ] ) and ( [ eq:17 ] ) , we conclude that which gives the desired result . the stability result for the ultraparabolic problem in multi - time dimension ( [ eq:9])-([eq:10 ] )can be obtained in the same way according to the situations the discrete solution has .particularly , one has now , numerical methods for nonlinear ultraparabolic equations are still very rare . from that point , we begin the section by considering the ultraparabolic problem ( [ eq:1])-([eq:2 ] ) with the nonlinear function satisfying lipschitz condition . by simple calculation analogous to the steps in linear nonhomogeneous case , we get the discrete solution , then use linear approximation to get the explicit form of the approximate solution . particularly , the following problem is considered . for the nonlinear source function satisfying the lipschitz condition : where is a positive number independent of . on account of the orthonormal basis admitted by and corresponding eigenvalues ,the problem can be made in the following manner . 
with , the problem ( [ eq:19 ] ) is equivalent to the following problem . for the numerical solution of this problem by finite difference scheme as introduced in the above section , a uniform grid of mesh - points is showed . here and , where and are integers and the equivalent mesh - width in time and .we shall seek an discrete solution determined by an equation obtained by replacing the time derivatives in ( [ eq:20 ] ) by difference quotients .the equation in ( [ eq:20 ] ) becomes and the initial conditions are where and . by induction , it follows from ( [ eq:21 ] ) that for all .thus , the explicit form of discrete solution of is obtained . from now on , we shall give an iterative scheme by knowledge of linear approximation .choosing , we seek satisfying here is called the approximate solution for the problem ( [ eq:1])-([eq:2 ] ) .our results are to prove that this solution approach to the discrete solution in norm as and study the stability estimate of in norm with respect to the initial data and the right hand side .let be the iterative sequence defined by ( [ eq:23 ] ) .then , it satisfies the a priori estimate where is a positive constant depending only on . if the nonlinear source function of the problem satisfying the lipschitz condition ( [ eq:18 ] ) .then , the iterative sequence defined by ( [ eq:23 ] ) strongly converges to the discrete solution ( [ eq:22 ] ) of in norm in the sense of where is a positive constant depending only on . the problem ( [ eq:1])-([eq:2 ] ) with the nonlinear function in a bit larger class in terms of non - lipschitz functions can be solved in the similar way .specifically , the function will be defined by the product of two functions and .we also study the a priori estimate of solution and obtain convergence rate between the approximate solution and the discrete solution .this study continuously contributes to the state of rarity of numerical methods for the nonlinear ultraparabolic problems . in particular, we shall consider the following problem . for and satisfying the following conditions . where are positive constants independent of .similar to the problem , the approximate solution of is given by let be the iterative sequence defined by ( [ eq:25 ] ) .then , it satisfies the a priori estimate where is a positive constant depending only on .if the nonlinear source function of the problem satisfying the conditions ( [ eq:24 ] ) , then the iterative sequence defined in ( [ eq:25 ] ) strongly converges to the discrete solution ( [ eq:22 ] ) of in norm in the sense of where is a postive constant depending only on .assume that is the exact solution of the problem ( also ) at .by finite difference scheme , we know that the error between the exact solution and discrete solution is of order . therefore , by triangle inequality , the error estimate between the exact solution and approximate solution is for the problem ( also ) where is a positive constant independent of and , provided by the smoothness of the exact function .using parseval s identity in ( [ eq:23 ] ) for the case , we have similarly , we can deduce for the case that moreover , from the condition ( [ eq:18 ] ) , we have combining ( [ eq:26])-([eq:28 ] ) and putting , we get for short , we denote . putting , it follows from ( [ eq:23 ] ) that for we have thus , we get we can always choose small enough such that . 
then , we have therefore , we obtain which leads to the claim that is a cauchy sequence in and then , there exists uniquely such that as .because of this convergence and lipschitz property ( [ eq:18 ] ) of nonlinear source term , it is easy to prove that as .therefore , is the discrete solution of the problem .when , it follows ( [ eq:29 ] ) from that for , we also have a similar proof .hence , we complete the proof of the theorem .in this section , we are going to show four numerical examples in order to validate the efficiency of our scheme .it will be observed by comparing the results between numerical and exact solutions .we shall choose given functions in such a way that they lead to a given exact solution . in details , we have four examples implementing all considered cases .the first and second examples are of the linear nonhomogeneous case while the rest of examples are showed for nonlinear cases and implying lipschitz and non - lipschitz functions .the examples are involved with the hilbert space and associated with homogeneous boundary conditions . on the other hand ,numerical results with many 3-d graphs shall be discussed in the last subsection .we consider the problem \times\left[0,1\right],\\ u\left(x,0,s\right)=\alpha\left(x , s\right ) & , \left(x , s\right)\in\left[0,\pi\right]\times\left[0,1\right],\\ u\left(x , t,0\right)=\beta\left(x , t\right ) & , \left(x , t\right)\in\left[0,\pi\right]\times\left[0,1\right ] , \end{cases}\ ] ] where and . in this example , we see that , then we get an orthonormal eigenbasis where the eigenvalues . therefore , by time discretization and the approximate solution ( [ eq:15 ] ) is for and and for . after dividing the space interval ] and are corresponding weights , is a given constant .we shall consider the following example \times\left[0,\frac{1}{10}\right],\\ u\left(x,0,s\right)=\alpha\left(x , s\right ) & , \left(x , s\right)\in\left[0,\pi\right]\times\left[0,\frac{1}{10}\right],\\ u\left(x , t,0\right)=\beta\left(x , t\right ) & , \left(x , t\right)\in\left[0,\pi\right]\times\left[0,\frac{1}{10}\right ] , \end{cases}\ ] ] where in this example , we can deduce the orthonormal eigenbasis and the eigenvalues . herethe nonlinear function implies and satisfying the theoretical assumptions .we shall construct an approximate solution by steps like in example 3 . for : where for : where denoting , we compute the discrete -norm and -norm of by where is a set of points on uniform grid \times\left(0,t\right]\times\left(0,t\right]$ ] and cardinality of . in our computations, we always fix and . the comparison between the exact solutions and the approximate solutions for the examples respectively are shown in figure 1-figure 4 in graphical representations . as in these figures , we can see that the exact solution and the approximate solution are close together .furthermore , convergence is observed from the computed errors in table 1-table 4 for example 1-example 4 , respectively , which is reasonable for our theoretical results ..numerical results ( [ eq:35 ] ) for example 1 with . 
In this paper, we have studied a numerical method for solving a class of nonlinear ultraparabolic equations in abstract Hilbert spaces, namely problem ([eq:1])-([eq:2]). Our numerical approach is based on the finite difference method and a representation by Fourier series. The method not only handles ultraparabolic problems in multiple space dimensions but also deals with a wide class of nonlinear ultraparabolic problems that many recent studies do not cover. Moreover, it is useful in numerical simulations when one wants to construct a stable, reliable, and fast-converging approximation. Some stability results and error estimates are obtained, and several numerical examples are presented to illustrate the efficiency of the method. It should be noted that the Fourier series representation of the solution may limit the method in applications on complicated domains, where the solution cannot be expressed by such a series. On the other hand, numerical methods for this class of nonlinear equations on large time scales, with better convergence rates, still need to be developed. All of these issues will be pursued in future research. The authors declare that they have no competing interests. All authors, VAK, LTL, NTYN and NHT, contributed equally to each part of this work and read and approved the final version of the manuscript. The authors wish to express their sincere thanks to the referees for many constructive comments leading to this improved version of the paper.
Fadugba S. E., Edogbanya O. H., Zelibe S. C., Crank-Nicolson method for solving parabolic partial differential equations, International Journal of Applied Mathematics and Modeling, Vol. 1, No. 3, 8-23, 2013.
A. I. Kozhanov, On the solvability of boundary value problems for quasilinear ultraparabolic equations in some mathematical models of the dynamics of biological systems, Journal of Applied and Industrial Mathematics, Vol. 4, pp. 512-525, 2010.
V. S. Dron and S. D. Ivasyshen, Properties of the fundamental solutions and uniqueness theorems for the solutions of the Cauchy problem for one class of ultraparabolic equations, Ukrainian Mathematical Journal, Vol. 50, No. 11, 1998.
M. Ghergu, V. D. Radulescu, Nonlinear PDEs: Mathematical Models in Biology, Chemistry and Population Genetics, Springer Monographs in Mathematics, 2011.
A. Ashyralyev, S. Yilmaz, Modified Crank-Nicolson difference schemes for ultra-parabolic equations, Computers and Mathematics with Applications 64 (2012) 2756-2764.
In this paper, our aim is to study a numerical method for an ultraparabolic equation with a nonlinear source function. Mathematically, the literature on initial-boundary value problems for ultraparabolic equations is not extensive, although such problems have many applications related to option pricing, multi-parameter Brownian motion, population dynamics, and so forth. In this work, we construct the approximate solution by means of a finite difference scheme and Fourier series. For the linear case, we give the approximate solution and obtain a stability result. For the nonlinear case, we use an iterative scheme based on linear approximation to obtain the approximate solution and derive error estimates. Several numerical examples are given to demonstrate the efficiency of the method.
Like the concept of quantum teleportation, quantum technologies today are no longer just central elements of science-fiction films. Thanks to the great progress of studies in quantum mechanics and the attempt to understand the quantum nature of the universe and how to put it to use, we now live in a scenario that before was possible only in the imagination. In the first half of the 1980s, the foundations that support research in quantum computing (QC) began to be laid, owing to the work of Paul Benioff, Richard Feynman and David Deutsch. Since those works, the dream of the quantum computer has been pursued because of its theoretical ability to solve certain classes of problems much faster than a classical computer. On May 4, 2016, IBM and its quantum computing scientists announced what was probably the most motivating piece of news for many researchers in this area: the first remotely and publicly accessible quantum computer was made available by the IBM team. The idea is that anyone (not necessarily a researcher in the field) can have remote access to a platform known as the _IBM Quantum Experience_ (IBM-QE). IBM's proposal is that, with this platform, anyone can simulate and even execute a computation on a 5 q-bit quantum computer. Of course one cannot do much with only 5 q-bits, but the idea is that we can test the IBM-QE to determine whether we are really dealing with a genuine quantum computer. With that aim, the IBM-QE has recently been used in several QC protocols; in particular, the real experiment has been illustrated by means of quantum teleportation, where a more detailed discussion of the experimental apparatus is given. That determination is the current problem that QC scientists are trying to solve in prototypes such as IBM's so-called quantum chips. In this paper we present to the community the characteristics and subtleties of the IBM-QE, with two basic purposes. The first is the scientific dissemination of this apparatus; the second is a didactic proposal, namely that the IBM-QE can be a valuable tool for introducing certain concepts in quantum computation and quantum information. The paper is structured as follows. In Section [elementos] we give a brief introduction to QC, always focusing on definitions that will be useful for our development; when necessary, we point to textbooks of the field where additional information can be found. In Subsection [quantumteleporte] we present the quantum teleportation model and how it can be implemented on a quantum computer. In Section [ibmqe] we present IBM's quantum computer and the IBM-QE, discussing their capabilities and characteristics, and we show a protocol that can be used to analyze decoherence effects in IBM's quantum computer, discussed in Section [deco]. As an example, in Subsection [quantumteleporteexpe] we implement the teleportation protocol and discuss how to interpret the results provided by the IBM-QE platform. Before discussing the properties and services available in IBM's quantum computer and in its IBM-QE platform, let us devote this section to the fundamentals of quantum computing. For a more detailed approach, we recommend Ref. as a good reference.
In classical computers, the smallest unit of information is defined as the _bit_ and, from it, we can define other quantities such as the _byte_ (8 bits), the _megabyte_ ( bytes), and so on. Analogously, we also have a fundamental unit of information in quantum computing and, not by coincidence, we call it the _q-bit_, the Portuguese rendering of _qu-bit_, which in turn is a compact way of referring to a _quantum bit_. But we may ask: what do we gain that is interesting (new) in this whole story? In general, q-bits are physically represented by orthogonal states associated with any two-level quantum system (vertical and horizontal polarization states of photons, spin states of the electron, etc.) and are denoted by the abstract _state vectors_ and , the so-called _computational-basis states_ (in analogy with the and of classical computers). What we gain in the transition from the bit to the q-bit is that there are peculiarities of quantum mechanics that allow us to combine these states so as to obtain some advantage over bits. Indeed, consider the superposition state , where (normalization condition of the state), which is a very common type of configuration in quantum mechanics. We see that this state represents a superposition of computational-basis states, while we need only a single particle to "write" it, something that is not possible in classical computers. The advantage of the quantum computer appears when we consider more than one q-bit. Let us consider a very simple example of a particular state of a system composed of two two-level systems, which carries all the possible combinations of two bits. Notice that with only two q-bits we can store an amount of information that requires 8 bits (1 byte) in classical computers; this gives a sense of how much physical space can be saved with quantum technologies. Another fundamental resource in quantum computers, and in all of quantum information theory, is the so-called _entanglement_. In simple terms, we can define entanglement as a physical (measurable) quantity associated with two particles that makes it impossible to fully characterize the state of one particle independently of the other. There is also a mathematical viewpoint that makes entanglement simpler to characterize, namely the notion of separability. As an example, consider the following two-q-bit states and ; we then say that such a state is separable (not entangled) if and only if there exist coefficients such that the state above can be written as . Although this way of defining entanglement as a consequence of non-separability is independent of which system we are interested in, there are situations in which entanglement is not so simple to detect. In general, for mixed states the situation can be more drastic, and we therefore recommend Ref.
for a deeper study of the topic. For the examples treated in this material, the method above is the most suitable, given its simplicity. Entanglement plays an essential role in quantum information theory and quantum computing because, when combined with the property of superposition, it provides protocols that illustrate the great efficiency of a quantum computer relative to a classical one. An overwhelming share of researchers in quantum computation and information single out Shor's algorithm as the clearest example of the advantage of a quantum computer over a classical computer. Shor's algorithm efficiently factors numbers with many digits that are intractable on classical computers. Besides it, we can mention the Deutsch-Jozsa algorithm (distinguishing constant functions from balanced ones) and Grover's algorithm (search in an unordered list) as algorithms that illustrate the theoretical potential expected of a quantum computer. This discussion is very introductory, since our focus is not the distinction between classical and quantum computers, but we leave Ref. as a reading source (a good read, by the way), where the author clearly highlights the emergence of quantum computing and its impact on technological development. The standard model of QC is the so-called _circuit model_. We choose to discuss this model because it is the model of computation available to the user in the IBM-QE. The idea of the circuit model in quantum computing is equivalent to the one used in classical computing: we represent the computation as a sequence of quantum logic gates (unitary transformations in quantum mechanics) applied to an input configuration, producing an output configuration with the result of the computation. The motivation for this model rests on the fact that there are manipulations we can perform on the quantum state of a particle that are reminiscent of the action of gates in classical computers. For this reason we define the _quantum gates_ of a _quantum circuit_, which together are the quantum analogues of the gates that make up a circuit in a classical computer. An example of a quantum gate with a classical analogue is the NOT gate, whose action flips the input bit from to ; its quantum analogue is the gate represented by the Pauli operator , which, acting on a spin state along the direction, flips the state from to . On the other hand, there are gates exclusive to QC, such as the Hadamard gate, whose action takes computational-basis states into superpositions of those states and vice versa. A feature shared by QC and classical computing is the existence of so-called _universal gate sets for computation_. These sets are so named because their elements can be combined to realize any logic gate of a circuit. Just as in classical computing, we have a set of elementary gates in QC. Within this set of elementary gates, we can identify subsets of gates that can be used to build the so-called _universal sets of quantum gates_.
By definition, these sets are composed of elementary gates that can be combined in such a way as to allow us to simulate the operation of any gate of a quantum circuit. Examples are the set { + 1 q-bit rotations} and the set {Toffoli, Hadamard}. In addition, there are gate sets that allow _approximate_ universality, such as the set { + Hadamard + gate}. When necessary we will return to some quantum gates of interest for our development, but once again we recommend a deeper reading of the field's textbooks for further details. Quantum teleportation (QT) was proposed in 1993 by Bennett _et al._; it constitutes a channel for sending information encoded in an unknown quantum state from one party (called Alice) to another (called Bob) that are spatially separated. The main result of QT is that, besides there being no need to know the state to be teleported, there is no limit to the distance between the agents (except for the classical channel that must be established between them). For example, recent experiments implementing QT have reached km-scale distances both with optical fibers and in free space. One of the resources needed to implement QT is that Alice and Bob must share any one of the entangled states below, known in the literature as _Bell states_, where take the values and , and where we define . The system composing the teleportation protocol consists essentially of 3 q-bits, two of them in Alice's possession and a third in Bob's possession. Alice's system consists of one particle of the entangled pair given in Eq. ([bellstate]), the other being in Bob's possession, and one particle in which the information to be sent by Alice is encoded.
Considering that the state to be sent by Alice is written as , the initial state of the system can be written as , where in particular we take the quantum channel between Alice and Bob (the entangled pair) to be the state given in Eq. ([bellstate]) with . Because of the separation between Alice and Bob, Alice cannot perform any kind of measurement on Bob's particle, but there is a special measurement Alice can make on her particles that enables the QT of the state to Bob. On her particles Alice must perform a measurement in the Bell basis, i.e., onto one of the states ([bellstate]). It is therefore convenient to write the state of the system in a basis in which Alice's particles are expressed in the Bell basis. Doing so, we find
$$\cdots + \frac{1}{2}\left[\,\vert \beta_{01}\rangle_{A}\left( a\vert 1\rangle + b\vert 0\rangle \right)_{B} + \vert \beta_{00}\rangle_{A}\left( a\vert 1\rangle - b\vert 0\rangle \right)_{B}\right],$$
from which Alice's possible outcomes when she measures in the Bell basis become clear. It is also evident that the state to which Bob's particle collapses after Alice's measurement depends exclusively on the result of Alice's measurement. Except in the case where Alice's outcome yields the state on her particles, no other result by itself completes the QT, since the possible collapsed states differ from the original state.
[Table [tabelacircuitotqsimples]: Bob's corrections for the QT of an unknown state.]
It is important to mention that this requirement made in the BBC protocol assumes that all the q-bits are close to one another, which is not convenient if we wish to send information from one place to another. This is evident, since we do not need a telephone to talk to someone sitting next to us. However, the proposal of the BBC protocol is that we can use QT to assist a computation on quantum computers, and in that scenario the BBC protocol is convenient. If we wish to use the BBC protocol to send communication between two spatially separated parties, we can consider the case where the initial state of the system is not , but rather , as given in Eq. ([inicialestado]). In this way we reduce our circuit so that we use a circuit like that of the BBC protocol, but only from the dotted line onward.
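To make the algebra above concrete, the following is a minimal state-vector sketch of the standard three-q-bit teleportation circuit in plain NumPy (it is not the IBM-QE implementation discussed later). Qubit ordering, Bell-state labels and the correction table follow the usual textbook convention, which may differ from the sign and labeling conventions of the stripped equations and of Table [tabelacircuitotqsimples] above; the input amplitudes a, b are arbitrary illustrative values.

```python
import numpy as np

# Single-qubit gates
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def on(gate, qubit, n=3):
    """Embed a single-qubit gate on `qubit` of an n-qubit register."""
    ops = [gate if k == qubit else I2 for k in range(n)]
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=3):
    """CNOT as an explicit 2^n x 2^n permutation matrix."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control] == 1:
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1
    return U

# State to teleport on qubit 0 (Alice); qubits 1 (Alice) and 2 (Bob) start in |0>
a, b = 0.6, 0.8j
psi_in = np.array([a, b], dtype=complex)
state = np.kron(psi_in, np.kron([1, 0], [1, 0])).astype(complex)

# Create the Bell pair between qubits 1 and 2, then apply Alice's Bell-basis rotation
state = cnot(1, 2) @ on(H, 1) @ state
state = on(H, 0) @ cnot(0, 1) @ state

# Go through the four possible outcomes (m0, m1) of Alice's measurement
state = state.reshape(2, 2, 2)               # indices: q0, q1, q2
for m0 in (0, 1):
    for m1 in (0, 1):
        branch = state[m0, m1, :]
        prob = np.vdot(branch, branch).real
        bob = branch / np.sqrt(prob)          # collapsed (normalized) Bob state
        if m1: bob = X @ bob                  # classical corrections
        if m0: bob = Z @ bob
        fidelity = abs(np.vdot(psi_in, bob)) ** 2
        print(f"outcome ({m0},{m1}): p = {prob:.2f}, fidelity = {fidelity:.3f}")
```

Running the sketch prints probability 0.25 and unit fidelity for each of the four outcomes, which is precisely the content of the correction table above: every measurement result teleports the state once the corresponding Pauli correction is applied.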
In this section we present the IBM-QE platform and discuss some of its properties and limitations. We restrict ourselves to detailing only the components that will be useful for our development and for the implementation of the teleportation protocol, leaving the reader interested in further details of the IBM-QE platform to explore the service through the link in Ref. . IBM's quantum computer is an experimental apparatus based on the model of quantum computing via magnetic flux in superconducting q-bits. Through its platform, the IBM group offers the possibility of working with a chip containing 5 q-bits, which we can manipulate at our discretion in order to obtain the result of a given computation. In Fig. [circuitos] we can see a scheme of how the q-bits are laid out on an IBM chip, together with the IBM-QE platform. Fig. [circuito] shows, essentially, the IBM-QE workspace, whose components and properties are the focus of our discussion now. [Fig. [circuitos]: arrangement of the q-bits on the IBM chip and the IBM-QE platform.] In the workspace, information on how to work with the IBM-QE, as well as some reviews of elements of quantum computing, can be found under the _User Guide_ tab. The _My Scores_ tab is where all the circuits built by the user are stored; these need not have been implemented experimentally, and in addition we can access the results of all experiments performed, organized according to the circuit implemented. The _Composer_ tab is essentially where the whole protocol must be entered and implemented. Upon selecting this tab, a window is shown asking the user for a mode of access to the _Composer_, which basically lets us enter an area where we can use either an ideal or a real processor. While the real processor simulates exactly what is expected in the experiment implementing the circuit, in which the computation is subject to decoherence phenomena, the ideal processor does not take this into account. We discuss such effects further below. On the right of the platform one finds (when accessing the real processor) the following "buttons", whose functions we now describe.
* _Simulate_: while building the circuit we can test whether there is any design error in it. Possible errors are operations that cannot be carried out by the IBM-QE. It is advisable to always simulate the circuit before implementing it experimentally.
* _Run_: once the circuit has passed the _Simulate_ test without errors, we can use _Run_ to implement the circuit remotely on an IBM chip. At this stage the project is submitted to the IBM team and will possibly enter a waiting queue until it is implemented.
* _New_: with this option a new circuit can be created.
* _Save_ or _Save as_: this option is for archiving the project at any stage of its construction.
* _Results_: after a protocol has been executed, its results can be checked here.
* _Help_: provides information on how each of the components (gates and measurements) can be used and how they act on the q-bits.
To complete the characterization of the platform, we need to discuss the set of gates that can be implemented by the IBM-QE, since these are the platform's fundamental elements. In Fig. [circuito] we can see the set of gates available on the platform just below the circuit. This gate set is composed of Clifford gates plus two non-Clifford gates that are added in order to obtain a complete set of universal gates for quantum computation. The available gates are the Pauli gates, the Hadamard gate ( ) and the phase gates ( and ), which can be simulated efficiently on a classical computer but which, when supplemented by the non-Clifford gates ( and ), form a gate set that allows us to perform operations that cannot be simulated efficiently on a classical computer. The only constituent of the IBM-QE that we have not yet mentioned, and which we believe deserves a detailed discussion, is the measurement process in the IBM-QE. Measurement is an important point in QC, especially in measurement-based models of QC, where the whole computation is carried out by a sequence of measurements on the q-bits of the system. The IBM-QE puts at the user's disposal two distinct ways of performing measurements at the end of a protocol, whose symbols are arranged just below the circuit in Fig. [circuito].
The first symbol (right after the word _measure_) is the symbol for measurement in the computational basis. In QC we have the option of recovering all the results of a computation that would be possible on a classical computer, which in turn only reads the and used to encode information; in QC we can make measurements analogous to these classical read-outs, thanks to the possibility of measuring in the computational basis and . The explanation for the in the symbol of the computational-basis measurement is that, historically, we first manipulated spins by means of magnetic fields oriented along the direction, where the orthogonal states along that direction were labeled and , associated with the spin states and , respectively. In the language of QC these labels can be exchanged for and . Thus, if we are simulating quantum computation with spins, as in nuclear magnetic resonance, a measurement in the computational basis corresponds to checking whether the spins of the system's constituents are aligned with the magnetic field (oriented along the direction) or point the opposite way. As a result, the computational-basis measurement informs us of the _probability_ of obtaining a given state as the outcome of the computation. For example, for a state as in Eq. ([state2qubits]), a computational-basis measurement gives the numerical values of the quantities , , and . This is a limitation of this type of measurement, since we are unable to distinguish between states such as , because the sign is irrelevant to the probability of measuring or . However, in situations where we need to characterize a state in order to know exactly what the output state of a computation is, we can use a Bloch measurement, represented by the symbol next to the computational-basis measurement symbol. To understand the Bloch measurement, we first need a way to visualize a one-q-bit state on the Bloch sphere, shown in Fig. [bloch]. Unlike the computational-basis measurement, a Bloch measurement gives us the values of the coordinates of a given state, which can in turn be used to determine it through the transformation ; in this way we can easily distinguish between the states , or any others. Because of the similarity of this type of measurement to the concept of tomography, in the field's specific terminology we say that this Bloch measurement is a state _tomography_. There are, however, situations in which this measurement fails, namely when we try to perform tomography of the state of two entangled q-bits. Entanglement carries the difficulty that it is impossible to fully characterize the state of one of the q-bits independently of the other, and since the IBM-QE provides Bloch-sphere states of a single q-bit only, we cannot use the tomography procedure to determine the entangled state of 2 q-bits. In that case it is more viable to use the computational-basis measurement, already accepting some loss of information about whatever phases exist in the states.
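As a small illustration of the difference between the two read-outs just discussed, the sketch below (plain NumPy, not the IBM-QE interface) computes the computational-basis probabilities and the Bloch coordinates (x, y, z) = (⟨X⟩, ⟨Y⟩, ⟨Z⟩) for the two states (|0⟩+|1⟩)/√2 and (|0⟩−|1⟩)/√2. The state choice is only an example: the probabilities coincide, while the Bloch vectors distinguish the two states.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def readouts(psi):
    """Computational-basis probabilities and Bloch coordinates of a 1-q-bit state."""
    probs = (abs(psi[0]) ** 2, abs(psi[1]) ** 2)
    bloch = tuple(np.vdot(psi, op @ psi).real for op in (X, Y, Z))
    return probs, bloch

plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

for name, psi in [("|+>", plus), ("|->", minus)]:
    probs, bloch = readouts(psi)
    print(name, "P(0),P(1) =", np.round(probs, 3), " (x,y,z) =", np.round(bloch, 3))
# Both states give P(0) = P(1) = 0.5, but their Bloch vectors are (+1, 0, 0) and (-1, 0, 0).
```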
When we begin putting together a protocol (a quantum circuit), we must tell the IBM-QE platform whether we wish to simulate our circuit on an _ideal_ processor or opt for a _real_ one. This choice directs us to distinct quantum-mechanical scenarios. On the _ideal_ processor we necessarily perform evolutions of the system without any influence of unwanted perturbations, whereas on the _real_ processor these perturbations are not ignored, leaving us closer to the realistic scenario of a quantum computer. The unwanted effects in quantum computing, as in any experimental apparatus in quantum mechanics, are decoherence effects, and they are present in the _real_ processor of the IBM-QE. These effects depend only on how our system interacts with the environment surrounding it. From the _User Guide_ tab we cannot obtain information about the type of decoherence acting on the 5 q-bit chip, but some information is provided in the _Composer_ tab when we choose the _real_ processor option. One piece of information, the most relevant at this point, is that each q-bit of the chip interacts with the environment differently, some interacting more and others less. One of the contributions of this article is precisely to study the type of decoherence acting on the q-bits, and we discuss this now. To study the effects of decoherence on the 5 q-bit chip of the IBM-QE, we use the following protocol. The first part of the circuit prepares a superposition state of the form , which can be prepared simply by acting with the Hadamard gate on the input state . The second step is to let the system evolve only under the action of the decoherence effects, which we can do simply by using identity gates (the first orange gate in Fig. [circuito], below the circuit); at the end of the process we make a computational-basis measurement (the circuit used is shown in Fig. [circuito1]). We followed this procedure and tracked how the probability amplitudes of obtaining the states and behave; the result is shown in the graph of Fig. [graph1]. From the results sketched in Fig. [graph1] we can see that the effect of the environment is to intensify the probability of obtaining the input state, an effect very similar to spontaneous emission for an evolution in which is the ground state of the system. One example of this type of evolution is through a Hamiltonian , the interaction Hamiltonian of a spin with a magnetic field along the direction, for instance. However, this is only one example of an evolution that allows us to explain what is happening. What we can say generically, since we do not know explicitly which fields act on the q-bits, is that the effect of the environment on the q-bits of the IBM chip is to drive the system toward the state .
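Assuming the q-bits are initialized in |0⟩ (the IBM-QE convention), the relaxation toward the input state described above is what one would expect from an amplitude-damping (spontaneous-emission-like) channel. The sketch below is not a model fitted to the IBM chip: the channel itself and the decay probability per idle step are assumptions made only for illustration. It nonetheless reproduces qualitatively the behaviour of Fig. [graph1]: starting from the superposition produced by the Hadamard gate, the probability of reading |0⟩ grows with the number of identity-gate steps.

```python
import numpy as np

def amplitude_damping(rho, gamma):
    """One step of the amplitude-damping channel with decay probability gamma."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

# |+> = H|0>, written as a density matrix
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

gamma = 0.05                    # assumed decay probability per identity-gate step
steps = 20
probs = []
for k in range(steps + 1):
    probs.append(rho[0, 0].real)    # probability of reading |0> after k idle gates
    rho = amplitude_damping(rho, gamma)

print(np.round(probs, 3))           # starts at 0.5 and approaches 1 as the q-bit relaxes
```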
Let us now apply what we have developed so far, using the quantum-teleportation circuit shown in Fig. [bbcprotocol]. Here we illustrate the application by studying the implementation of the teleportation of the states and . The circuits that implement the teleportation of these states are shown in Fig. [circuitotele]. [Fig. [circuitotele]: teleportation circuits for the states (circled in blue) and (circled in red). The states to be teleported are prepared in q-bit , and the resource is prepared in q-bits and , where q-bit is with Alice and q-bit is with Bob; the (Hadamard) gate in the circuit circled in blue (red) is used to prepare the state to be teleported.] We simulated the circuits of Fig. [circuitotele] with the _real_ processor, where one detail must be taken into account: the CNOT gates implemented in a circuit run on the real processor cannot have an arbitrary q-bit as target, only q-bit . Since the experimental apparatus is based on magnetic flux in superconducting q-bits, this is a restriction of the model, due to the physical apparatus used, that must be taken into account when building a circuit. As for the result of the circuit implementation, we can use Eq. ([computatedstate]) directly to show that the state of the system before the measurement, for the teleportation of the state , must be given by , where the state is teleported with no need for correction for the outcomes or , while it is teleported up to a correction by the gate when the outcomes are or . Fig. [estado1] shows the results provided by the IBM-QE when we simulate the circuit on an ideal and on a real processor. There we can see that the state is teleported with certainty in at least half of the runs of the protocol, with and each contributing of the successes. [Fig. [tabelas]] For the state we have the following final state , where we clearly see that the success probability, again with no need for correction, is , corresponding to the outcomes or . An equivalent way of writing the above state is
$$\cdots + \frac{1}{2}\left[\,\vert 100\rangle_{\mathrm{AB}} - \vert 101\rangle_{\mathrm{AB}} - \vert 110\rangle_{\mathrm{AB}} + \vert 111\rangle_{\mathrm{AB}}\right],$$
which makes clear the probability distribution obtained in Fig. [estadomais] for the case where we simulate the circuit with an _ideal_ processor. As mentioned, note that this state is a superposition of computational-basis states with explicit phases (different signs) between the states, but in the results provided by the IBM-QE, shown in Fig. [estadomais], we do not have that information. In this paper we have presented some characteristics and functionalities of IBM's quantum computer and of the IBM-QE platform. We have highlighted its subtleties, showing that there are still limitations in the IBM-QE that are intrinsic to the experimental apparatus and to the model of computation used to build it. Even without knowing the technical details of the decoherence effects on IBM's 5 q-bit chip, we were able to identify one type of decoherence present in the system by analyzing a particular case of evolution. Our proposal for studying sources of decoherence in the IBM-QE can be re-examined for other states, and other evolutions can likewise be devised in an attempt to identify the existence of other forms of decoherence in the IBM-QE. Since we lack an efficient didactic tool for presenting concepts such as teleportation and the robustness of quantum circuits against decoherence, among others, we believe the IBM-QE platform is a didactic proposal to be applied in specific courses in the area of quantum computation and information. Teleportation is only one example that can be discussed; there is a range of other protocols that can be worked through in order to introduce certain concepts and problems of QC in a more didactic and playful way. We would like to thank the IBM team and the members of the _IBM Quantum Experience_ project who, without a doubt, have contributed not only to the development of this work but to the entire quantum computation and information community. In particular, I thank Dâmaris Costa Frutuoso, of the
Universidade Regional do Cariri, for reading the material and suggesting improvements to it. We also thank the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and the Instituto Nacional de Ciência e Tecnologia de Informação Quântica (INCT-IQ) for the financial support of this project.
Group22: Proceedings of the XXII International Colloquium on Group Theoretical Methods in Physics, eds. S. P. Corney, R. Delbourgo, and P. D. Jarvis, pp. 32-43 (Cambridge, MA, International Press, 1999); e-print arXiv:quant-ph/9807006v1 (1998).
D. G. Cory, R. Laflamme, E. Knill, L. Viola, T. F. Havel, N. Boulant, G. Boutis, E. Fortunato, S. Lloyd, R. Martinez, C. Negrevergne, M. Pravia, Y. Sharf, G. Teklemariam, Y. S. Weinstein and W. H. Zurek, Fortschritte der Physik 48, 875 (2000).
The announcement of a quantum computer that can be accessed remotely by anyone from a laptop is an event of great importance for quantum computing scientists. In this work we present the quantum computer of _International Business Machines_ (IBM) and its platform, the _IBM Quantum Experience_ (IBM-QE), as a didactic proposal for studies in quantum computation and information and, no less important, as scientific dissemination of this announcement made by the IBM team. We present the main tools (quantum gates) available in the IBM-QE and, by means of a simple strategy, we discuss one of the sources of decoherence in IBM's 5 q-bit chips. As an example application of our study, we show how to implement quantum teleportation using the IBM-QE.
the problem of _ compressive phase retrieval _ ( cpr ) is generally stated as the problem of estimating a -sparse vector from noisy measurements of the form for , where is the sensing vector and denotes the additive noise . in this paper , we study the cpr problem with specific sensing vectors of the form where and are known . in words , the measurement vectors live in a fixed low - dimensional subspace ( i.e , the row space of ) . these types of measurements can be applied in imaging systems that have control over how the scene is illuminated ; examples include systems that use structured illumination with a spatial light modulator or a _ scattering medium _ . by a standard lifting of the signal to , the quadratic measurements ( [ eq : measurements ] )can be expressed as with the linear operator and defined as {i=1}^{n } & & \text{and } & & \mc a:\mb x\mapsto\mc w\left(\mb{\varpsi}\mb x\mb{\varpsi}^{\msf t}\right),\end{aligned}\ ] ] we can write the measurements compactly as our goal is to estimate the sparse , rank - one , and positive semidefinite matrix from the measurements ( [ eq : lifted - measurements ] ) , which also solves the cpr problem and provides an estimate for the sparse signal up to the inevitable global phase ambiguity .[ [ assumptions ] ] assumptions + + + + + + + + + + + we make the following assumptions throughout the paper . 1 .[ asm : a1 ] the vectors are independent and have the standard gaussian distribution on : 2 .[ asm : a2 ] the matrix is a _ restricted isometry _matrix for -sparse vectors and for a constant ] .the notation is used when for some absolute constant . for any matrix , the frobenius norm , the nuclear norm , the entrywise -norm , and the largest entrywise absolute value of the entriesare denoted by , , , and , respectively . to indicate that a matrix is positive semidefinite we write .the main challenge in the cpr problem in its general formulation is to design an accurate estimator that has optimal sample complexity and computationally tractable . in this paperwe address this challenge in the special setting where the sensing vectors can be factored as ( [ eq : nested - sensing ] ) .namely , we propose an algorithm that * provably produces an accurate estimate of the lifted target from only measurements , and * can be computed in polynomial time through efficient convex optimization methods .several papers including have already studied the application of convex programming for ( non - sparse ) phase retrieval ( pr ) in various settings and have established estimation accuracy through different mathematical techniques .these phase retrieval methods attain nearly optimal sample complexities that scales with the dimension of the target signal up to a constant factor or at most a logarithmic factor . however , to the best of our knowledge , the exiting methods for cpr either lack accuracy and robustness guarantees or have suboptimal sample complexities .the problem of recovering a sparse signal from the magnitude of its subsampled fourier transforms is cast in as an -minimization with non - convex constraints .while shows that a sufficient number of measurements would grow quadratically in ( i.e. , the sparsity of the signal ) , the numerical simulations suggest that the non - convex method successfully estimates the sparse signal with only about measurements .another non - convex approach to cpr is considered in which poses the problem as finding a -sparse vector that minimizes the residual error that takes a quartic form . 
a local search algorithmcalled gespar is then applied to ( approximate ) the solution to the formulated sparsity - constrained optimization .this approach is shown to be effective through simulations , but it also lacks global convergence or statistical accuracy guarantees .an alternating minimization method for both pr and cpr is studied in .this method is appealing in large scale problems because of computationally inexpensive iterations .more importantly , proposes a specific initialization using which the alternating minimization method is shown to converge linearly in noise - free pr and cpr .however , the number of measurements required to establish this convergence is effectively quadratic in . in and the -regularized form of the trace minimization proposed for the cpr problem .the guarantees of are based on the restricted isometry property of the sensing operator {i=1}^{n} ] denote the support set of the -sparse target . define to be a matrix that is identical to over the index set and zero elsewhere .by optimality of and feasibility of in ( [ eq : estimate ] ) we have where the last line follows from the fact that and have disjoint supports .thus , we have now consider a decomposition of as the sum such that for the matrices have disjoint support sets of size except perhaps for the last few matrices that might have smaller supports . more importantly, the partitioning matrices are chosen to have a decreasing frobenius norm ( i.e. , ) for .we have where the chain of inequalities follow from the triangle inequality , the fact that by construction , the fact that the matrices have disjoint support and satisfy ( [ eq : decomposition - delta ] ) , the bound ( [ eq : delta - tail ] ) , and the fact that and are orthogonal .furthermore , we have where the first term is obtained by the cauchy - schwarz inequality and the summation is obtained by the triangle inequality . because by definition , the triangle inequality and the fact that and are feasible in ( [ eq : estimate ] ) imply that .furthermore , lemma [ lem : psiinnerproduct ] below which is adapted from ( * ? ? ?* lemma 2.1 ) guarantees that for and we have .therefore , we obtain where the chain of inequalities follow from the lower bound in ( [ eq : rip - psipsi ] ) , the bound ( [ eq : psi(e0+e1)psi ] ) , the upper bound in ( [ eq : rip - psipsi ] ) , the bound ( [ eq : tail - less - head ] ) , and the fact that . if , then we have and thus adding the above inequality to ( [ eq : delta - tail ] ) and applying the triangle then yields the desired result .suppose that and have unit frobenius norm .using the identity and the fact that and have disjoint supports , it follows from ( [ eq : rip - psipsi ] ) that the general result follows immediately as the desired inequality is homogeneous in the frobenius norms of and .[ lem : projected - estimator ] let be a closed nonempty subset of a normed vector space .suppose that for we have an estimator , not necessarily in , that obeys .if denotes a projection of onto , then we have .
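The two convex programs used in this paper are not reproduced here, since their exact formulations involve equations that are garbled in this copy. Instead, the following plain-NumPy sketch illustrates the same two-stage logic under simplifying assumptions: a spectral estimate stands in for the low-rank (lifted) recovery stage, and iterative soft thresholding (a simple lasso solver) stands in for the sparse recovery stage. The dimensions p, m, k, n, the regularization weight and the step size are illustrative choices only, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, k, n = 200, 40, 5, 2000      # ambient dim, subspace dim, sparsity, measurements (illustrative)

# k-sparse target and a random subspace map Psi (rows of the sensing subspace)
x = np.zeros(p)
x[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
Psi = rng.standard_normal((m, p)) / np.sqrt(m)

# Phaseless measurements y_i = (w_i^T Psi x)^2 with Gaussian w_i
W = rng.standard_normal((n, m))
y = (W @ (Psi @ x)) ** 2

# --- Stage 1: recover b ~ Psi x (up to a global sign) from the lifted rank-one problem ---
# Spectral surrogate: the top eigenvector of (1/n) sum_i y_i w_i w_i^T concentrates around b.
M = (W * y[:, None]).T @ W / n
eigval, eigvec = np.linalg.eigh(M)
b_hat = np.sqrt(np.mean(y)) * eigvec[:, -1]    # scale fixed by E[y] = ||Psi x||^2

# --- Stage 2: sparse recovery of x from Psi x ~ b_hat (ISTA on the lasso objective) ---
lam, step = 0.05, 1.0 / np.linalg.norm(Psi, 2) ** 2
z = np.zeros(p)
for _ in range(500):
    z = z - step * (Psi.T @ (Psi @ z - b_hat))
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

# Resolve the global sign ambiguity and report the relative error
err = min(np.linalg.norm(s * z - x) for s in (+1.0, -1.0)) / np.linalg.norm(x)
print("relative estimation error:", round(float(err), 3))
```

The point of the sketch is the decoupling itself: once the low-dimensional lifted matrix is estimated, the remaining problem is an ordinary sparse linear inverse problem in the original signal space.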
We propose a robust and efficient approach to the problem of compressive phase retrieval, in which the goal is to reconstruct a sparse vector from the magnitudes of a number of its linear measurements. The proposed framework relies on constrained sensing vectors and a two-stage reconstruction method consisting of two standard convex programs that are solved sequentially. In recent years, various methods have been proposed for compressive phase retrieval, but they either have suboptimal sample complexity or lack robustness guarantees. The main obstacle has been that there is no straightforward convex relaxation for the type of structure in the target. Given a set of underdetermined measurements, there is a standard framework for recovering a sparse matrix, and a standard framework for recovering a low-rank matrix; however, a general, efficient method for recovering a jointly sparse and low-rank matrix has remained elusive. Deviating from models with generic measurements, in this paper we show that if the sensing vectors are chosen at random from an incoherent subspace, then the low-rank and sparse structures of the target signal can be effectively decoupled. We show that a recovery algorithm consisting of a low-rank recovery stage followed by a sparse recovery stage produces an accurate estimate of the target when the number of measurements is , where and denote the sparsity level and the dimension of the input signal. We also evaluate the algorithm through numerical simulations.
imaging using intensity - only ( or phaseless ) measurements is challenging because much information about the sought image is lost in the unrecorded phases .the problem of recovering an image from intensity - only measurements , known as the phase retrieval problem , arises in many situations in which it is difficult , or impossible , to measure and record the phases of the signals received at the detectors .this is the case , for example , in imaging from x - ray sources , or from optical sources , where one seeks to reconstruct an image from the spectral intensities .this problem arises in various fields , including crystallography , optical imaging , astronomy , and electron microscopy , and the images to be formed from intensity - only measurements vary from galaxies to microscopic objects . in this paper, we consider the problem in active array imaging when the sensors only record the intensities of the signals .this can be the case because less expensive sensors are used , the data need to be collected faster , or because the phases are difficult to measure at the frequencies used for imaging . for frequencies above 10 ghz or so , it is difficult at present to record the phase of the scattered signals directly .there are at least two different approaches for imaging using intensity - only measurements . in the first approach, the phases are retrieved from the experimental set - up before doing the imaging .this is done , for example , in holographic based methods where an interferometer records the interference pattern between a reference signal and the analyzed signal .the interferometric image depends on the phase difference between the two signals and , hence , holds the desired phase information .an experimental strategy is also proposed for diffraction tomography in , which requires measurements of the signal on two planes spaced at distances smaller than a wavelength .such techniques are , however , hard to implement in practice .the second approach carries out imaging directly , without previous estimation of the missing phases , using reconstruction algorithms .a frequently used method is based on alternating projection algorithms , proposed by gerschberg and saxton ( gs ) .this method uses two intensity measurements to form the of pixels .this is so , because it requires the solution of a optimization problem with unknowns , instead of the original one with unknowns . in other words, it transforms the phase retrieval problem into one of recovering a rank - one matrix , which leads to very large optimization problems that are not feasible if the images are large . 
as a consequence ,it is desirable to have other approaches that guarantee convergence to the exact solution and , at the same time , keep the size of the problem small so the solution can be found more efficiently .it is important that any such approaches be robust to noise .the main contribution of this paper is the introduction of a new strategy for imaging when only the intensities are recorded .this strategy has the desired properties mentioned above : exact recovery , robustness with respect to noise , and efficiency for large problems .we show that imaging of a small number of localized scatterers can be accomplished using the _ time reversal operator _ , where is the full array response matrix of the imaging system .we show that the _ time reversal operator _ can be obtained from the total power recorded at the array using an appropriate illumination strategy and the polarization identity .once the _ time reversal operator _ has been obtained , we show that the location of the scatterers can be determined using its singular value decomposition ( svd ) .we consider two methods that make use of the svd of .the first method finds the locations of the scatterers from the perspective of sparse optimization , using a multiple measurement vector ( mmv ) approach .the second method finds the locations of the scatterers by beamforming .we use the music ( multiple signal classification ) method , which is equivalent to beamforming , using the significant singular vectors as illuminations .both methods recover the location of the scatterers exactly in the noise - free case and are robust with respect to additive noise .the imaging methods described here are efficient , do not need prior information about the object to be imaged , and guarantee exact recovery .we note , however , that recording all the intensities needed for the _ time reversal operator _ may not be possible .indeed , the number of illuminations involved is , where is the number of transducers in the array . in order to simplify the data acquisition process, we also propose two methods that reduce the number of illuminations needed for imaging .the first method selects pairs of transducers randomly , and finds the missing entries in the _ time reversal operator _ via matrix completion .this method reduces the number of illuminations to one half .the second method does not select the transducers randomly , but uses only a few transducers at the edges of the array .this method reduces the number of illuminations even more .the paper is organized as follows . in section [ sec :model ] , we formulate the active array imaging problem using intensity - only measurements . in section [ sec : timereversalop ] , we show how to obtain the _ time reversal operator _ when only the intensities of the signals are recorded at the array , and we discuss the relation of the _ time reversal operator _ with the full data matrix ( that also contains the information about the phases of the signals ) .we also discuss in section [ sec : timereversalop ] imaging with an incomplete set of illuminations , i.e. , when some entries of the _ time reversal operator _ are missing . in section[ sec : methods ] , we briefly review mmv and music methods , the two imaging methods used in the paper to form the images . 
in section [ sec : numerics ] , we show the results of numerical experiments .section [ sec : conclusions ] contains our conclusions .in active array imaging we seek to locate the positions and reflectivities of a set of scatterers using the data recorded on an array . by an active array ,we mean a collection of transducers that emit spherical wave signals from positions and record the echoes with receivers at positions .the transducers are placed at distance between them , which is of the order of the wavelength , where is the wave speed in the medium and is the frequency of the probing signal . in this paper , we focus on imaging of localized scatterers , which means that the scatterers are very small compared to the wavelength ( point - like scatterers ) . furthermore , for ease of exposition , we assume that multiple scattering between the scatterers is negligible . the imaging methods considered here can be implemented when multiple scattering is important too ( see for details ) .let the active array with transducers at positions , , be located on the plane .assume that there are point - like scatterers in a image window ( iw ) , which is at a distance from the array .we discretize the iw using a uniform grid of points , .the scatterers have reflectivities , and are located at positions , which we assume coincide with one of these grid points . if the scatterers are far apart or the reflectivities are small , interaction between scatterers is weak and multiple scattering can be neglected .then , with the born approximation , the response at due to a narrow - band pulse of angular frequency sent from and reflected by the scatterers is given by where is the green s function that characterizes wave propagation from to in a homogeneous medium .to write the data received on the array in a more compact form , we define the _ green s function vector _ at location in iw as ^t\ , , \ ] ] where means the transpose .this vector can also be interpreted as the illumination vector of the array targeting the position .we also define the true _ reflectivity vector _^t\in\mc^k ] in are the signals sent from each of the transducers in the array .if only the intensities of the signals are available , the imaging problem is to determine the location and reflectivities of the scatterers from the absolute value of each component in , i.e. , from the intensity vectors in , the superscript denotes conjugate transpose .this problem is , however , nonlinear and , therefore , there is much interest in finding algorithms that give the true global solution effectively .in this paper , we propose a novel imaging strategy for the case in which only data of the form is recorded and known .the main idea behind the approach proposed here is that we can use a related matrix to the full response matrix that has good properties for imaging and can be obtained from data of the form .this related matrix is the _ time reversal matrix _ . in this section, we will show first how to obtain it from the intensity vectors using the polarization identity , and how to use it for imaging using its singular value decomposition .the key point in active array imaging is that we control the illuminations that probe the medium and , therefore , we can design illumination strategies favorable for imaging . in our case, we seek an illumination strategy from which can obtain the _ time reversal matrix _ from .suppose we can put any illumination on the array , but we can only measure quadratic measurements as in , i.e. 
, only the intensity of the data can be recorded . in that case , we also have access to the quadratic form indeed , note that only the _ total power _ received at the array is involved in . in , is the intensity of the signal received at the i - th transducer .note that represents a self - adjoint transformation from the _ illumination space _ to the _ illumination space _ .the entries of this square matrix can be obtained from the total power received at the array using multiple illuminations as follows .the i - th entry in the diagonal , , is just the total power received at the array when only the i - th transducer of the array fires a signal . in other words , , where the illumination vector ^t ] , where is the noiseless intensity received on the i - th receiver , and is a parameter that measures the noise strength .if we define the signal - to - noise ratio at the i - th receiver ( ) as the mean to standard deviation of the received power , then the on each receiver is the same , and is given by therefore , the signal - to - noise ratio for the total power is if the intensity does not vary too dramatically from one receiver to another . for example , it suffices to assume there exists so that it is straightforward to see that if the intensity at the i - th receiver is a random variable uniformly distributed on ] whose column corresponds to the _ effective source vector _ whose components are given by under illumination , .this matrix variable has columns that share the same sparse support but possibly have different nonzero values due to the different illuminations .the mmv formulation for active array imaging is to solve for from the matrix - matrix equation where is the sensing matrix , and ] being the matrix whose columns are the full data vectors generated by the illuminations ( up to a global phase ) , that is , . in and , ] defined in the support recovered in the first step .note that now and , therefore , has small dimensions .following , we could obtain from intensity - only measurements by solving where is a linear map from to .an estimate for could be found , in principle , by solving by least squares .note , however , that is of low rank ( in fact rank since it is defined via an outer - product ) , so we obtain from the following affine rank minimization problem in order to take advantage of the additional information on the unknown . once found from this optimization problem , we can obtain the amplitude of the reflectivities by taking on the support .however , is an np - hard problem and , therefore , there is no simple algorithm that gives the true global solution effectively .therefore , we replace by the nuclear norm in the objective function of , and consider the following optimization problem as given in the nuclear norm is the sum of the singular values of the matrix while the rank is the number of nonzero singular values and , hence , it can be used as a convex surrogate for the rank functional .problem is now convex and can be solved in polynomial time .to solve , we follow and use the gradient descent method with singular value thresholding , as outlined below in algorithm [ algominimumlsqrank ] . in algorithm[ algominimumlsqrank ] , the soft - thresholding operation is given by where is the vector of positive singular values arranged in descending order , is the thresholding parameter , superscript means positive part , and and are the orthogonal matrices from the svd of .we stress that through step one , i.e. 
by using music to locate the scatterers , we have effectively reduce the dimension of the unknown in and , thus , the optimization problem is very easy to solve .set and , and pick the initial value for step size . compute weight .compute . compute the matrix .set .compute . in algorithm [ algominimumlsqrank ] ,the adjoint operator is given by which is found from the relation .we have seen in our numerical experiments that replacing the soft - thresholding operation by a rank enforcement , that is , setting at each iteration in algorithm [ algominimumlsqrank ] , also gives excellent results .this can be understood as solving the least squares problem with the rank constrain this problem is , however , non - convex due to the non - convexity of the set of low - rank matrices and , therefore , it might not converge to the true solution in general .in this section we present numerical simulations in two dimensions .the linear array consists of transducers that are one wavelength apart .scatterers are placed within an iw of size which is at a distance from the linear array .the amplitudes of the reflectivities of the scatterers and their phases are set randomly in each realization .the scatterers are within an iw that is discretized using a uniform lattice with points separated by one wavelength .this results in a uniform mesh .hence , we have unknowns . in all the images shown below , we normalize the spatial units by the wavelength . figure [ fig : nonoise ] shows the images obtained with music ( middle column ) and with the mmv formulation ( right column ) using noisless data .the top and bottom rows are two different configurations with and scatterers , respectively .the left column shows the distribution of scatterers to be recovered .when there is no noise in the data , both methods recover the positions and reflectivities of the scatterers exactly .the exact locations of the scatterers in these images are indicated with small white dots . [ cols="^,^,^ " , ]we give a novel approach to imaging localized scatterers from intensity - only measurements .the proposed approach relies on the evaluation of the _ time reversal matrix _ which , we show , can be obtained from the total power recorded at the array using an appropriate illumination strategy and the polarization identity .once the _ time reversal matrix _ is obtained , the imaging problem can be reduced to one in which the phases are known and , therefore , one can use phase - sensitive imaging methods to form the images .these methods are very efficient , do not need prior information about the desired image , and guarantee the exact solution in the noise - free case .furthermore , they are robust with respect to noise . at the algorithmic level, a key property of the proposed approach is that it significantly reduces the computational complexity and storage consumption compared to convex approaches that replace the original vector problem by a matrix one and , therefore , create optimization problems of enormous sizes . with our approach ,the algorithms keep the original unknowns of the imaging problem , where is the number of pixels of the sought image , and hence , images of larger sizes can be formed . as recording all the intensities that are needed for obtaining the _ time reversal matrix_ can be cumbersome , we also give two solutions that simplify the data acquisition process .they greatly reduce the number of illuminations needed for the proposed imaging strategy , but they increase the sensitivity to noise . 
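As an illustration of Algorithm [ algominimumlsqrank ] described above, the following is a minimal sketch of the gradient descent with singular value thresholding. The exact step-size and threshold rules are not legible in the extracted text, so the parameter values and the operator names below (in particular `L` and `L_adj`) are placeholders rather than the authors' implementation.

```python
import numpy as np

def soft_threshold_svd(X, tau):
    """Soft-threshold the singular values of X: keep the positive part of (sigma_i - tau)."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vh

def low_rank_fit(L, L_adj, b, shape, tau=1e-2, step=1e-1, n_iter=500):
    """Gradient descent with singular-value thresholding (illustrative sketch).

    L     : callable mapping the unknown matrix X to the predicted intensity data
    L_adj : the adjoint map, from data space back to matrices
    b     : measured intensity-only data
    shape : dimensions of the unknown (small, since the MUSIC step has fixed the support)
    """
    X = np.zeros(shape, dtype=complex)
    for _ in range(n_iter):
        grad = L_adj(L(X) - b)                    # gradient of the least-squares term
        X = soft_threshold_svd(X - step * grad, tau)
        # variant mentioned in the text: replace the soft threshold by a hard
        # rank-1 projection (keep only the leading singular triplet) at each step
    return X
```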
we illustrated the performance of the proposed strategy with various numerical examples .
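For completeness, a minimal end-to-end sketch of the two core steps of the proposed strategy is given below: recovering the time reversal matrix from total-power (intensity-only) measurements via the polarization identity, and locating the scatterers from its spectral decomposition. The sign and conjugation conventions depend on notation that is not legible in the extracted text, so the details should be read as an assumption-laden illustration rather than the authors' exact formulation.

```python
import numpy as np

def time_reversal_matrix(total_power, n):
    """Recover the Hermitian matrix M = P^* P from total-power data only.

    total_power(f) is assumed to return the total power received at the array
    for the illumination vector f; this is the only access to the data.
    It uses n single-element illuminations plus the pairs (e_i + e_j) and
    (e_i + 1j e_j), i.e. of order n^2 illuminations in total.
    """
    E = np.eye(n)
    M = np.zeros((n, n), dtype=complex)
    diag = np.array([total_power(E[:, i]) for i in range(n)])
    np.fill_diagonal(M, diag)
    for i in range(n):
        for j in range(i + 1, n):
            re = 0.5 * (total_power(E[:, i] + E[:, j]) - diag[i] - diag[j])
            im = -0.5 * (total_power(E[:, i] + 1j * E[:, j]) - diag[i] - diag[j])
            M[i, j] = re + 1j * im
            M[j, i] = np.conj(M[i, j])
    return M

def music_image(M, steering, grid, n_scatterers):
    """MUSIC functional on the image grid, built from the eigenvectors of M.

    steering(y) returns the array's Green's-function (illumination) vector for
    a grid point y; whether this vector or its conjugate spans the signal
    subspace depends on the Green's-function convention, so the conjugation
    below is an assumption.
    """
    w, V = np.linalg.eigh(M)                      # M is Hermitian positive semidefinite
    noise = V[:, : M.shape[0] - n_scatterers]     # eigenvectors of the smallest eigenvalues
    image = np.empty(len(grid))
    for k, y in enumerate(grid):
        g = np.conj(steering(y))
        g = g / np.linalg.norm(g)
        image[k] = 1.0 / (np.linalg.norm(noise.conj().T @ g) + 1e-12)
    return image
```

In the noise-free case the peaks of this imaging functional fall on the scatterer grid points, which mirrors the exact-recovery statement above.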
We propose a new strategy for narrow-band, active array imaging of localized scatterers when only the intensities are recorded and measured at the array. We consider a homogeneous medium so that wave propagation is fully coherent. We show that imaging with intensity-only measurements can be carried out using the _time reversal operator_ of the imaging system, which can be obtained from intensity measurements using an appropriate illumination strategy and the polarization identity. Once the time reversal operator has been obtained, we show that the images can be formed using its singular value decomposition (SVD). We use two SVD-based methods to image the scatterers. The proposed approach is simple and efficient. It does not need prior information about the sought image, and guarantees exact recovery in the noise-free case. Furthermore, it is robust with respect to additive noise. Detailed numerical simulations illustrate the performance of the proposed imaging strategy when only the intensities are captured.
in scientific computations , expectations with respect to probabilities , induced by continuous time processes , are often replaced by monte carlo averages over independent trajectories . for diffusions , generated bystochastic differential equations ( sdes ) , the trajectories are usually approximated numerically ( see e.g. ) . the accuracy assessment of such numerical procedures is a well studied topic and the available theory establishes and quantifies the convergence of the approximations to the actual solution in a variety of modes , depending on the properties of the sde coefficients .this in turn typically suffices to claim convergence of expectations for path functionals , continuous in an appropriate topology , but unfortunately , may not apply to discontinuous functionals , some of which arise naturally in applications .one important example of such a functional is the hitting time of a domain boundary. let be the diffusion process on , generated by the it sde where is the brownian motion and the coefficients and are functions , assumed to satisfy the regularity conditions , guaranteeing existence of the unique strong solution ( see e.g. ) .the hitting time of the level is where .thus is an extended random variable with values in the polish space , endowed with the metric , .consider a family of continuous processes , generated by a numerical scheme with the time step parameter ( such as e.g. the euler - maruyama recursion and below ) and suppose that approximates the diffusion in the sense of weak convergence .more precisely , let be the space of real valued continuous functions on , endowed with the metric then converges weakly to , if for any bounded and continuous functional such convergence can often be established using the techniques , developed in e.g. , , .in particular , if is a functional , almost surely continuous with respect to the measure induced by , then implies for any continuous and bounded function . in other words ,the weak convergence of the processes implies the weak convergence of the random variables or , equivalently , the convergence of the probability distribution functions for any , at which the distribution function is continuous .let us now take a closer look at the hitting time viewed as a functional on . clearly is discontinuous at some paths : for example , if and , then as , but and for all .on the other hand , is continuous at any , which either upcrosses : or downcrosses : if this type of paths is typical for the diffusion , i.e. if the set of all such paths is of full measure , induced by the process , then is essentially continuous and the weak convergence still implies the weak convergence . however , if is an accessible absorbing boundary of , the paths which hit can not leave it at any further time . 
for such diffusions is discontinuous on a set of positive probability and the weak convergence can not be directly deduced from .hitting times play an important role in applied sciences , such as physics or finance , and since their exact probability distribution can not be found in a closed form beyond special cases , practical approximations are of considerable interest .there are two principle approaches to compute such approximations .one is based on the fact that the expectation of a given function of the hitting time solves the dirichlet boundary problem for an appropriate pde , and thus the approximations can be computed using the generic tools from the pde numerics .sometimes , the particular structure of the emerging pde can be exploited to calculate expectations of special functions of hitting times , such as moments ( as e.g. in the linear programming approach of ) .the probabilistic alternative is to use the monte carlo simulations , in which the diffusion paths are approximated by numerical solvers .typically the diffusions are simulated on a discrete grid of points and the evaluation of the hitting times requires construction of the continuous paths through an interpolation .the naive approach is to use the general purpose interpolation techniques , such as the one used in our paper ( see below ) .better results are obtained if the possibility of having a hit between the grid points is taken into account as in e.g. , , , .the convergence analysis of the approximations of the hitting times , based on the various numerical schemes , appeared in , , , .the results obtained in these works , assume ellipticity or hypoellipticity of the diffusion processes under consideration , which corresponds to the case of non - absorbing boundary in the preceding discussion .the analysis beyond these non - degeneracy conditions appears to be a much harder problem . in this paperwe consider a particular diffusion on , with an absorbing boundary at .as explained above , the absorption time in this case is a genuinely discontinuous functional of the diffusion paths , which makes the convergence analysis of the approximations a delicate matter .we propose a simple approximation procedure , based on the euler - maruyama scheme , and prove its weak consistency . in the next sectionwe formulate the precise setting of the problem and state the main result , whose prove is given in section [ sec-3 ] .the results of numerical simulations are gathered in section [ sec - num ] and some supplementary calculations are moved to the appendices .consider the diffusion process , generated by the it sde where , and are constants and is the brownian motion , defined on a filtered probability space , satisfying the usual conditions .this sde has the unique strong solution ( proposition 1 in ) and is known in mathematical finance as the constant elasticity of variance ( cev ) model ( see e.g. ) . for , it is also feller s branching diffusion , being the weak limit of the galton - watson branching processes under appropriate scaling .the process is a regular diffusion on and a standard calculation reveals that is an absorbing ( or feller s exit ) boundary ( see 6 , ch .15 in ). we will denote by the corresponding family of markov probabilities with , induced by on the measurable space with the metric .consider the continuous time process , , which satisfies the euler - maruyama recursion at the grid points and is piecewise linear otherwise : ,\ ] ] where , is a small time step parameter and is a sequence of i.i.d . 
random variables. since the diffusion coefficient of degenerates and is not lipschitz on the boundary , this sde does not quite fit the standard numerical frameworks such as or .nevertheless the scheme does approximate the solution of in the sense of the weak convergence of measures , as was recently shown in ( see also , , ) .consequently , for any -a.s .continuous functional {w } \phi(x),\ ] ] where stands for the the weak convergence , defined in .since a typical trajectory of oscillates around the level after crossing it , the functional is -a.s .continuous for ( see lemma [ lem1 ] below ) and hence as .this argument , however , does not apply to , since it is _ essentially _ discontinuous , as discussed in the introduction . leaving the question of convergence open, we shall prove the following result [ thm ] for any , note that is a continuous functional for any fixed and hence can be seen as a mollified version of the discontinuous .the parameter controls the mollification , relatively to the step - size parameter of the euler - maruyama algorithm . in practical terms ,the convergence provides theoretical justification for the procedure , in which the approximate trajectory , generated by and , is stopped not at , but at , which only approaches zero as .our method does not quantify the convergence in terms of e.g. rates , but the numerical experiments in section [ sec - num ] indicate that this stopping rule produces practically adequate results .as will become clear from the proof , our approach exploits the local behavior of the sde coefficients near the boundary and can be applied to the more general one - dimensional diffusions of the form , for which a weakly convergent numerical scheme is available .for example , all the arguments in the proof of theorem [ thm ] directly apply to the diffusion , whose coefficients and have a similar local asymptotic at as the cev model .the sde is a study case , which seems to capture the essential difficulties of the problem , related to the degeneracy of the sde coefficients on the absorbing boundary .it is a convenient choice , since the weak convergence , being the starting point of our approach , has been already established for the cev model in , and the explicit formulas for the probability density of are available , allowing to carry out the numerical experiments .the proof , inspired by the approach of s.ethier , is based on the following observation ( a variation of billingsley s lemma , see proposition 6.1 ) [ lem : bil ] let , and be borel probability measures on a metric space such that as .let be a separable metric space with metric , and suppose that , and are borel measurable mappings of into such that 1 .[ i ] is -a.s .continuous for all 2 .[ ii ] , -a.s .[ iii ] for every , for an increasing real sequence .then as .let be a continuous with respect to bounded real valued function on , then where and denote the expectations with respect to and respectively . since is continuous and , -a.s ., the last term vanishes as by the dominated convergence .moreover , since for any fixed , is -a.s .continuous and as , consequently , where the latter equality holds by .the claim follows by arbitrariness of .let us now outline the plan of the proof . 
in our context ,the probability measures , induced by the family , play the role of and by proposition [ prop ] they converge weakly to the law of the diffusion .since the diffusion coefficient of is positive , away from the boundary point 0 , is a -a.s .continuous functional ( lemma [ lem1 ] ) and hence of proposition [ lem : bil ] holds . on the other hand , for any ( lemma [ lem2 ] ) , which implies of proposition [ lem : bil ] .the more intricate part is the convergence , which is verified in lemma [ lem3 ] , using the particular structure of the sde .the statement of theorem [ thm ] then follows from proposition [ lem : bil ] . 0.1 in the following resultis essentially proved in : [ prop ] the processes , defined by and , converge weakly to the diffusion , defined by , as . for the process , obtained by piecewise constant interpolation of the points generated by the recursion, the claim is established in theorem 1.1 in .the extension to the piecewise linear interpolation is straightforward .[ lem1 ] for all , is a -a.s .continuous functional on .we shall prove the claim for completeness and the reader s convenience .for and , let and define we shall first show that is continuous on , i.e. that and then check that to this end , note that if and , then for all ( recall for ) .thus implies and hence .since is arbitrary , , i.e. holds .now take , such that . if , the claim obviously holds by continuity of .if , fix an such that .since we have and thus . on the other hand , as , for any , there is an , such that .it follows that and hence . by arbitrariness of , we conclude that and holds .it is left to show that holds .the diffusion satisfies the strong markov property and thus ( we write for ) , and hence the required claim holds , if .take now to be a scale function of the diffusion , i.e. a solution to the equation , where is the generator of : it is well known , e.g. , that we can take it to be positive and increasing , specifically , for , can be taken .the process is a nonnegative local martingale and thus a supermartingale ( e.g. p. 197 ) .the random variable is a bounded stopping time and by the optional stopping theorem we have for any by the definition of and path continuity of , it follows that , and since is increasing , we have that .thus it follows from that for all and , consequently , {t\wedge \tau_{{\varepsilon}-}}=0 ] , on the set , -a.s .the obtained contradiction implies , as claimed .[ lem2 ] for any . if , then and hence and the claim follows , since is arbitrary . if , then for an , and thus . for sufficiently small , andthe claim follows .let be the probability on the space , carrying the sequence ( see ) and denote by the markov family of probabilities , corresponding to the discrete time process with .since the process is piecewise linear off the grid , the condition of proposition [ lem : bil ] follows from [ lem3 ] for any and roughly speaking , means that a trajectory of , which approaches the boundary , is very likely to hit it .this seemingly plausible statement is not at all obvious , since the coefficients of our diffusion decrease to zero near this boundary , making it hard to reach . by letting the level decrease to zero at a particular rate allows to approximate expectations of the hitting times by those of , which in turn can be estimated using their relations to the corresponding boundary value problems . 
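Before the proof of lemma [ lem3 ], the practical procedure that theorem [ thm ] justifies can be made concrete. The sketch below assumes a CEV-type equation dX = mu X dt + sigma X^gamma dW with gamma in [1/2, 1) and a mollified stopping level h^beta; the exact coefficients and the admissible range of beta are not recoverable from the extracted text, so the values used here are labeled assumptions.

```python
import numpy as np

def absorption_time(x0, mu, sigma, gamma, T, h, beta, rng):
    """One Euler-Maruyama path of the assumed CEV-type diffusion
           dX = mu*X dt + sigma*X**gamma dW,   gamma in [1/2, 1),
    stopped at the mollified level h**beta instead of at the absorbing
    boundary 0, as in the procedure discussed above."""
    level = h ** beta
    x = x0
    for k in range(int(T / h)):
        if x <= level:                        # the piecewise-linear path has crossed h**beta
            return k * h
        x += mu * x * h + sigma * x ** gamma * np.sqrt(h) * rng.standard_normal()
    return np.inf                             # not absorbed before T

# Monte Carlo estimate of the absorption probability by time T = 1
rng = np.random.default_rng(0)
samples = np.array([absorption_time(x0=0.5, mu=0.0, sigma=1.0, gamma=0.5,
                                    T=1.0, h=1e-3, beta=0.25, rng=rng)
                    for _ in range(2_000)])
p_hat = (samples <= 1.0).mean()
half_width = 1.96 * np.sqrt(p_hat * (1 - p_hat) / len(samples))   # CLT confidence half-width
```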
in what follows , , , etc .denote unspecified constants , independent of and , which may be different in each appearance .define the crossing times of level \right\ } , \quad a\in { \mathbb r}_+,\ ] ] with .the sequence , is a strong markov process and is a stopping time with respect to its natural filtration .since is piecewise linear , and thus by the triangle inequality since for , it follows }{\mathbb{p}}_{z}\big(\nu_{\delta^\beta } > \eta'\big),\end{aligned}\ ] ] where ( assuming is small enough ) .further , and thus holds , if we show }{\mathbb{e}}_z\big(\nu_{\delta^\beta}\wedge \nu_{1}\big)=0,\ ] ] and }{\mathbb{p}}_{z}\big(\nu_{\delta^\beta}>\nu_{1}\big)=0.\ ] ] we shall use the regularity properties of the function \ ] ] near the boundary point 0 , summarized in the appendix [ a ] .in particular , is continuous on the interval ] .note , however , that the derivatives of explode at the boundary point , which is related to the possibility of absorption .we shall extend the domain of to the whole by continuity , setting for ] .now we shall bound the residual terms in . by lemma [ lem : a2 ] , and hence similarly , and by corollary [ corb ] ( applied with ) to bound , note that by lemma [ lem : a2 ] , and thus where the latter inequality holds , since on the set . since is between and for , we have and thus on the set , on the set we have note that for , and , such that , applying this inequality to on the set we get where the inequalities hold for all sufficiently small and we used the bounds and .consequently , on the set plugging the bounds and into and applying the corollary [ corb ] , we obtain the estimate where .using the gaussian tail estimate we get which along with and yields the bound finally , by corollary [ corb ] , and hence by lemma [ lem : a2 ] , and consequently plugging the estimates , and into , we obtain the bound }\psi(y ) < \infty.\ ] ] by the monotone convergence , the latter implies }\psi(y ) } { 1-c\delta^{\gamma}-c \delta^{1/4}},\ ] ] and by continuity of , }{\mathbb{e}}_z \nu \le \sup_{z\in [ 0,{\varepsilon}]}\psi(z)\xrightarrow{{\varepsilon}\to 0}0,\ ] ] verifying . consider the function , whose domain we extend to the whole real line by continuity , setting for and for . the process , satisfies the decomposition , with replaced by . taking into account that for ] and , }{\mathbb{p}}_z\big(\nu_{\delta^\beta}>\nu_{1}\big ) \le c \inf_{z\in [ 0,{\varepsilon}]}\varphi(z)\xrightarrow{{\varepsilon}\to 0}0,\ ] ] which verifies . the condition originates in the estimate , which is plugged into .the principle difficulty is that the term can not be effectively controlled for greater values of as .for example , it is not clear how to bound the right hand side of already for and .in insurance , one is often interested in calculating the probability of ruin by a particular time . 
for the diffusion, this probability can be found explicitly and for , it has a particularly simple form : note that has an atom at when : cev model with , , , and .the corresponding exact value of the absorption probability is , title="fig : " ] + figure [ fig1 ] depicts the results of the monte carlo simulation , in which the probability of absorption has been estimated for particular values of the model parameters , using i.i.d trajectories , generated by the euler - maruyama algorithm and .the relative estimation errors are plotted versus ( in the scale ) , along with the confidence intervals , based on the clt approximation .the results appear to be practically adequate : for example , the accuracy of is obtained already with .the positive bias of the error is not surprising , since the earlier absorption is more probable for the larger threshold .the simulation results also indicate in favor of the convergence which remains a plausible conjecture .in this section we summarize some asymptotic estimates of the absorption times for the diffusion near the boundary , which are used in the proof of lemma [ lem3 ] . recall that are the solutions of the following problems respectively ( see e.g. ) : and where is the operator , given by .the solutions in the class of continuous functions on $ ] , which are twice differentiable on are given by the formulas : ,\ ] ] and where [ lem : a1 ] the function is smooth on , and the scale density is smooth on , and }s(x)<\infty.\ ] ] from we get and thus differentiating , we get and consequently [ lem : a2 ] the function is smooth on and and we have and , since is increasing and , for , it follows that and for now implies and hence for , further , differentiating we see that satisfies as well and hence which verifies the claim .[ lemb ] let be a sequence of random variables and be an integer valued random variable . then for constants and and an integer [ corb ] let be an i.i.d . sequence and be an integer valued random variable .then for any , and integer with a constant depending only on and .moreover , for any , the sum satisfies with a constant , depending only on .s. n. ethier and t. g. kurtz . .wiley series in probability and mathematical statistics : probability and mathematical statistics .john wiley & sons inc ., new york , 1986 . characterization and convergence .liptser and a. n. shiryayev . , volume 49 of _ mathematics and its applications ( soviet series)_. kluwer academic publishers group , dordrecht , 1989 . translated from the russian by k. dzjaparidze [ kacha dzhaparidze ] .
A standard convergence analysis of simulation schemes for the hitting times of diffusions typically requires non-degeneracy of their coefficients on the boundary, which excludes the possibility of absorption. In this paper we consider the CEV diffusion from mathematical finance and show how a weakly consistent approximation for the absorption time can be constructed using the Euler-Maruyama scheme.
planet formation is an active field of observational , experimental and theoretical astrophysical research . within the last two decades morethan 800 planets have been found orbiting around other stars , which proves that the formation of planets is not a process restricted to our own solar system .today we know that the kilometer - sized precursors of planets , the so - called _ planetesimals _ , form in protoplanetary disks ( ppds ) , which are collapsed molecular clouds of gas and dust around young stars .the formation of larger bodies starts with the coagulation of colliding ( sub-)micrometer - sized dust grains .however , due to the distance and the opacity of these disks , as well as due to the astronomical timescales involved ( thousands to millions of years ) , direct observation of the growth processes is not possible .therefore , theoretical investigations , backed up by experiments , are needed to understand the first phases of planet formation .the collisions within the initial population of grains are driven by the interaction between the dust and the gas of the ppd . for small particles ( with typical sizes of a few m ) , the collision velocities are so low that every collision leads to sticking between the dust particles . as typically the collision speed increases with increasing dust - aggregate size ,the outcome of these collisions is not easily predictable .previous laboratory work has shown that while at low velocities sticking occurs , higher velocities lead to bouncing of the aggregates or even result in their fragmentation .the two latter effects obviously constrain the growth of larger dust agglomerates and it is , thus , unclear how kilometer - sized _ planetesimals _ form .unified experimental and theoretical work to understand the growth of dust agglomerates .19 laboratory experiments on dust - aggregate collisions were reviewed and compiled into a detailed collision model that predicts the outcome of collisions between dust aggregates of all masses and collision velocities ( see figure [ f : ghanaplot ] for an example of the collision model ) .this model was used by as input for a monte carlo growth simulation , which describes the evolution of dust particles in protoplanetary disks .these numerical simulations have shown that the particle growth stops at sizes on the order of a few millimeters , since collisions of these and of larger particles lead to bouncing ( yellow region if figure [ f : ghanaplot ] ) .however , the result of the simulations strongly depends on the transitions between the different collision outcomes in the input model . extrapolated the transition between the different outcomes over a wide range of the parameter space , because of a lack of experimental data in these regions .therefore , several new experiments have been designed to investigate the regions of the parameter space that is important for the growth simulations .these studies particularly focus on the transition between sticking and bouncing . 
because this transition is predicted by the model to occur at collision velocities lower than 1 cm / s , it is not feasible to perform the experiments in the laboratory .additionally , a large number of collisions have to be observed to make a reliable conclusion about the transition regime .this can be performed either by conducting several short ( up to 9 s of microgravity ) experiments in a drop tower ( and ) or longer ( up to 180 s of microgravity ) experiments on a suborbital rocket .one of the new experimental investigations is the suborbital particle aggregation and collision experiment ( space ) that flew onboard the rexus 12 rocket in march 2012 .the aim of the space experiment was to study collisions within an ensemble of submillimeter - sized particles under microgravity conditions at velocities relevant to ppds .rexus stands for rocket experiments for university students .the rexus / bexus ( the associated atmospheric balloon ) program is a combined project shared by the german aerospace agency dlr and the swedish national space board ( snsb ) , launching two ballistic rockets per year .the launch site is at esrange near kiruna in northern sweden .the rexus rocket ( figure [ f : rexus ] ) is 5.6 m long and has a diameter of 356 mm .it can host 3 to 5 experiments depending on their dimensions and mass ; the maximum total payload mass is about 95 kg .the rocket is composed of an improved orion motor , a recovery module containing a parachute and a gps system , a service module supplying the experiments with power and data connections and the experiment modules , one of which is accommodated inside the nosecone and is ejectable . during a nominal rexus flight , the rocket motor has burnt out after about 26 s and is being separated from the payload at s , when the ballistic , i.e. , the reduced - gravity flight starts .the low - gravity phase lasts between 150 and 180 s , depending on payload mass ( figure [ f : flight_events ] ) .the rocket is spin - stabilized during launch and is being de - spun thereafter by releasing two masses on strings ( yo - yo ) after the motor burn - out and the nosecone experiment ejection .apogee is reached at around 90 km altitude about 150 s after liftoff. the payload touches ground again after a flight time of about 640 s , decelerated by the recovery module parachute .landing occurs inside a range of km with a nominal velocity of around 8 m / s . for flying onboard the rexus rocket, experiments have to be designed according to the rexus user manual .the experiment module available for integrating the space experiment was made of a 14 ( 355.6 mm ) diameter , 4 mm thick , 220 mm long aluminum cylinder and an aluminum bulkhead to accommodate most of the experimental components .the service module of the rocket provides power and communication ( telemetry and command ) as well as control wires . 
for each experiment, the available voltage lies between 24 and 36 v and the average electric current should be kept under 1 a , with peak currents of 3 a possible .telemetry and command can both be implemented via an rs-422 interface .in addition , the service module provides three control lines : the liftoff ( lo ) , start - of - experiment ( soe ) and start - of - data - storage ( sods ) signals can be implemented into the flight sequence of the rocket s onboard software and can be used by each experiment as needed .space used the lo and soe signals to switch on experiment components automatically during the flight .in addition to the requirements induced by interfacing the rocket itself , an experiment built for flying on rexus must also comply to the different launch and flight conditions : * temperature : the launch site being located in northern sweden , near the city of kiruna , outside temperatures can reach values down to -40 , while the experiment module s skin can heat up to + 70 during launch .experiment components have to be functional ( and hence tested ) over that range of temperatures .electronic components in particular , must reliably operate when switched on at low temperatures due to long waiting times of the rocket on the launch rail . *durability : before the launch actually takes place , several tests and countdown simulations are being performed with the hardware being outside on the launch rail .this means that experiment components have to be able to withstand several power cycles at launch site conditions without affecting the experiment run during the actual flight .* late access : once the rocket is assembled , no access to the experiment hardware is granted .if late access is required , an umbilical or hatches have to be included in the experiment design .* loads : a peak acceleration of about 20 g is reached during launch . on top of that, the rocket and rotates at 3 to 4 hz inducing corresponding centrifugal forces on the payload .vibrational loads also have to be considered and the hardware tested according to the requirements .the objective of the space experiment is to investigate the collision behavior of submillimeter - sized dust aggregates at low impact velocities .thus , the hardware is responsible for the generation of the required collisions among the dust aggregates and the software manages the recording of the high - speed video data of the dust - aggregate ensemble throughout the reduced - gravity phase . to achieve this , the experimental setup holds dust aggregates in evacuated glass containers that are back - lit by an led array and allows for recording the motion of these particles with a high - speed camera .the experimental set - up encompasses three particle containers allowing for some variation of particle properties . to ensure that the collision behavior of the particles is not influenced by the presence of gas, the particle containers are placed inside a vacuum chamber which is evacuated during the experiment run . 
as the residual gas drag andthe residual spin acceleration of the rocket might influence the free motion of the dust aggregates , a two - dimensional shaking mechanism was added , which is being powered by a motor and agitates the particle containers in a rotary fashion ( see figure [ f : intern ] ) .the particle containers consist of three glass boxes holding the dust aggregates throughout the experiment duration .the walls of these boxes are flat and made of borofloat 33 glass pieces except for the two internal walls , which are aluminum surfaces , to investigate the difference in the sticking behavior of the dust aggregates to glass and aluminum targets .the inner glass walls have been coated with a special anti - adhesive nano - particle layer provided by the fraunhofer institute for surface engineering and thin films in braunschweig .this coating consists of nanometer - sized tips intended to prevent the dust aggregates from sticking to the walls ( see figure [ f : coating ] ) .two of the dust - aggregate containers are equal - sized with dimensions of 11 mm and the third glass container is bigger with dimensions of 24 mm .the walls of the particle containers are not perfectly contacting one another , leaving slits narrow enough to allow for the evacuation of air while keeping the dust aggregates inside their cells . to be able to gather data on particle collisions free from outside influences ,the experiment was performed under vacuum conditions ( at pressures below 10 mbar ) .therefore , the particle containers were built into a vacuum chamber that was evacuated through a balzers evc 110 m electronic valve .the motor , shaking mechanism and led array were built into the vacuum chamber ( figures [ f : space ] and [ f : inside_chamber ] ) . to ensure the best vacuum conditions possible during flight , andbecause the pre - vacuum and turbomolecular pumps could not be integrated into the rocket ( for obvious weight and dimension reasons ) , the outside shell of the space module was outfitted with an umbilical allowing for external evacuation of the vacuum chamber and powering of the vacuum valve .the two pumps were assembled into a zarges box that could be attached to the launch rail of the rocket .this way , evacuating the chamber was possible until 120 s before launch ( figure [ f : umbilical ] ) . andpower cables.,scaledwidth=50.0% ]there are several rationales for shaking the particle containers during the experimental run of space .first , shaking provides for a uniform distribution of the dust aggregates in the glass cells .this is not only required because the launch accelerations agglomerate the dust on one side of the particle containers , but also compensates for disturbances of the weightlessness ( due to atmospheric drag and residual rocket rotation ) throughout the flight phase . during the flight with a rexus rocket , the achieved microgravity levelis affected by two factors .one is the drag produced by the residual atmosphere at the altitudes of the rocket s parabola .for rexus 12 , the apogee of the rocket s trajectory was at about 86 km . for the time between 65 and 275 s after liftoff, the camera recorded the motion and collision of dust aggregates under reduced gravity - conditions . 
during this period, rexus 12 was above 60 km in altitude and the residual atmospheric drag resulted in accelerations of less than 10 g in the direction of the roll rotational axis of the rocket .the second factor is the residual spin of the rocket after yo - yo de - spin .as the rocket is spin - stabilized during launch , two masses attached to strings are released after motor burn - out .the spin can not completely be eliminated and for rexus 12 the module was still rotating at about 11/s during the experimental run . as the particle containers were placed at about 40 mm from the rocket s roll rotation axis , the residual spin resulted in an outward radial acceleration of 1.45 g . hence ,if left freely floating , the dust particles would have the tendency to accumulate in one corner of the particle container under the combined effects of residual centrifugal forces and atmospheric drag . to avoid this behavior ,the experiment was outfitted with a shaking mechanism agitating the particles along two directions by applying a circular motion to the glass containers ( see figure [ f : intern ] ) .the agitation mechanism was realized with a motor and several cog - wheels under the frame of the containers ( figure [ f : intern ] and numbers 2 and 3 in figure [ f : inside_chamber ] ) .the shaking motion of the particle containers was not only a counter - measure to the residual accelerations , it moreover provided a way of adjusting the internal kinetic energy of the many - particle system . by choosing a specific shaking profile, the collision speeds between particles can be adjusted and thresholds for growth or disruption of dust aggregates can be investigated .for the space experiment onboard the rexus 12 flight , the shaking profile was divided in three distinctive sequences ( see figure [ f : motor_profile ] ) . *first cycle : after an initial shake - up of 10 s duration at full speed ( 100% of the motor s nominal voltage is applied ) to disintegrate clumps formed during launch , the motor voltage was reduced to half its nominal value for 10 s and ramped up to 100% again within 15 s. it should be mentioned that , due to the non - linearity of the motor drive , the rotation speed decreased less than 50% . *second cycle : after an initial shake - up of 5 s duration to disrupt agglomerates formed in the previous cycle , the voltage applied to the motor was reduced to 20% of its nominal value for a duration of 25 s. this corresponds to the flight phase around the apogee of the rocket s trajectory .the minimum acceleration transferred from the particle container walls to the aggregates during this phase is on the order of 10 g. the motor voltage was then ramped up to 125% of its nominal value within 15 s to observe the aggregates fragmenting over a large range of speeds . *third cycle : after an initial shake up of 5 s duration to disrupt agglomerates of the previous cycle , the motor input voltage was ramped down to 50% of its nominal value within 20 s to observe the agglomeration of particles over a large range of speeds .the voltage was then kept at 50% for 10 s and ramped up to 100% again within 20 s. * after the last cycle : the motor was kept running for 15 s. the particle containers are illuminated by an led array positioned behind the glass cells .the led array is composed of 86 blue leds , each 3 mm in diameter , distributed over 9 rows .blue leds were used because the camera sensitivity is best at this specific wavelength , compared to white light or other colors . 
to obtain a uniform back illumination over the entire field of view of the camera , the led array and the backside of the particle containersare covered with a sheet of diffusion paper .the particle collisions during the experimental run were recorded by an allied vision technologies prosilica ge680 high speed camera at a continuous rate of 170 frames per second and a resolution of 640 pixels .this camera was chosen as a compromise between the size and weight restrictions for rexus experiments and the imaging performance .the prosilica ge680 camera has no internal control nor recording abilities and must be commanded by an external computer . for this purpose , a combination of a toradex single - board computer robin z530 l and its interface board daisy pico - itx was built into the experiment .these components were chosen because the robin computer comes with an rtl 8111d ethernet controller capable of handling the jumbo packets provided by the prosilica camera at high frame rates .the daisy carrier board allows for the connection between the robin computer and the camera via an ethernet cable .linux ubuntu server 11.0 was used as an operating system for its ability to implement all the required interfaces while using only very little of the computer s memory space .the c++ software controlling the camera was responsible for configuring the camera , starting the frame acquisition and recording the streamed frames to an external compact flash card ( figure [ f : acquisition ] ) .the optical path between the particle containers and the high - speed camera had to be adapted to the module size of the rexus rocket .an additional mirror between the window of the vacuum chamber and the camera lens folded the optical path and allowed for fitting the experimental hardware into the required dimensions of the rexus rocket module ( figure [ f : optical_path ] ) . as the duration of a rexus flight is quite short ( with about three minutes reduced - gravity time ) , it was decided to run the experiment fully autonomously , using only the control wires of the rocket s service module instead of up - linking commands . to that purpose ,an electronics board was designed and manufactured that was responsible for executing the pre - defined experimental procedures along a timeline .an atmega32 micro - controller switched the experimental components on and off via solid state relays ( aqv 252 g ) .this was done either upon receiving a signal from the service module in flight configuration , or commands from the ground station in test configuration .the micro - controller also produced health and status telemetry while the experiment was powered .the onboard computer and electronics board are automatically switched on and kept in stand - by mode when the experiment is powered .the lo signal is used to enter the flight mode .upon reception of the soe signal , the micro - controller switches on the camera , the illumination and runs through the shaking profile of the motor as described above and in figure [ f : motor_profile ] .once the sequence has terminated , motor , illumination and camera are switched off .un - powering of the experiment is performed directly by the rocket s service module , ensuring that the recorded imaging and housekeeping data can not be overwritten or erased during descent and landing ( table [ t : timeline ] ) . 
at the same timethat the experiment timeline is being worked through , the micro - controller also switches on the pressure sensor and records its readings on its own eeprom memory . during a typical experimental run , the camera produces 170 frames per second , each one containing 640 pixels at 8 bit grayscale values .this amounts to a data rate of about 61 mb / s flowing from the camera to the onboard computer and being recorded to an external flash card . to achieve this high data rate, the camera is connected to the onboard computer via an ethernet cable and streams jumbo packets . on the recording side, a 600x transcend compact flash card is connected to the daisy / robin unit via a s - ata cable and the associated flashcard to s - ata adapter ( figure [ f : acquisition ] ). an image - recording sequence of about 210 s duration produces about 11 gb of data , which can be stored on the 16 gb compact flash card in packages of 800 mb each .the dust samples used in the space experiment onboard rexus 12 were composed of sio aggregates of two different kinds and size distributions .we used monodisperse , spherical sio particles as well as aggregates constituting of polydisperse , irregular sio grains ( figure [ f : dust ] ) .dust particles.,scaledwidth=47.5% ] the monodisperse sio aggregates were sieved into two different size categories with mean aggregate diameters of 180 m and and 370 m , respectively .polydisperse sio aggregates were sieved to a mean diameter of 370 m only .the larger particle container ( with a size of 24 mm ) was filled with the 180 - sized monodisperse aggregates , one of the smaller containers ( with a size of 11 mm ) with 370 - sized monodisperse aggregates , and the second small container with 370 - sized polydisperse aggregates , respectively ( figure [ f : container_filling ] ) .dust types and sizes in the space containers during the rexus 12 flight.,scaledwidth=47.5% ] on march 19 , 2012 , at 14:05 utc , rexus 12 was successfully launched with the space experiment aboard .after motor burn - out and separation at 26 s and 77 s , respectively , the rocket reached apogee at around 86 km altitude after 140 s into the flight .due to technical issues , however , the parachute did not deploy as expected and the payload impacted the ground with a much higher velocity than nominal .the rocket and its payloads were nonetheless recovered and the individual modules could be handed back to the experimenters .the space experiment was powered 600 s before liftoff , received the lo and soe signals at 0 and 65 s after launch , respectively , and ran its complete experimental sequence before being switched off by the rocket timeline at 330 s after liftoff ( table [ t : timeline ] ) .the whole time it was powered , space delivered nominal health and status telemetry indicating a nominal run through the internal experiment timeline . upon recovery , most of the experimental hardware was damaged , due to the hard landing , but the compact flash card , which had been safely built in below the onboard computer and was holding the scientific data , could be retrieved intact .the electronics board including the micro - controller could also be recovered functional , delivering the flight pressure data of the evacuated chamber ..the rexus 12 and space internal timeline . 
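Two figures quoted in this section appear to have lost exponents or digits in the extraction: the outward radial acceleration due to the residual spin, and the camera resolution behind the quoted data rate. A short back-of-envelope cross-check, with the assumed values stated explicitly, is given below.

```python
import math

# (1) Residual-spin acceleration on the particle containers.
#     Assumed figures: roll rate ~ 11 deg/s (quoted as "11/s") and containers
#     about 40 mm off the roll axis.
omega = math.radians(11.0)                 # rad/s
r = 0.040                                  # m
a = omega ** 2 * r                         # centripetal acceleration
print(f"{a:.2e} m/s^2 = {a / 9.81:.1e} g")
# ~1.5e-3 m/s^2, i.e. ~1.5e-4 g, consistent with the quoted "1.45 ... g"
# if a lost factor of 10^-4 is assumed.

# (2) Imaging data volume, assuming the GE680's full 640 x 480 px sensor
#     (the resolution string is truncated in the text) at 8 bit and 170 fps.
rate = 640 * 480 * 1 * 170                 # bytes per second
print(f"{rate / 1e6:.0f} MB/s, {rate * 210 / 1e9:.1f} GB per 210 s recording")
# ~52 MB/s of raw pixel data and ~11 GB per run, matching the quoted ~11 GB;
# the quoted ~61 MB/s figure presumably includes transport overhead.
```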
[ cols= " < , < , < " , ][ t : timeline ] the high - speed imaging data retrieved from the compact flash card turned out to be of the expected quality and can be used for the analysis of the collision behavior of the dust aggregates as intended . only a few discrepancies to a completely nominal experimental run occurred , which , however , do not compromise the scientific analysis of the space experiment : * the camera run - up lasted about three times longer than during ground tests ; instead of the usual 10 s from power - up to start of frame acquisition on ground , about 30 s were required during flight .the reason for this anomaly could not be determined .* some frames of the high - speed camera image sequences were lost during the first 10 s of the data recording .this behavior had already been observed during ground tests and had been taken into account in the experiment timeline by implementing a longer recording time . *the sticking efficiency of the submillimeter - sized dust aggregates and the anti - adhesive glass walls of containers was much higher than expected and tested before during drop - tower experiments . especially during slow shaking phases, dust aggregates tended to form clusters on the glass walls rather than colliding with one another while flying freely , as done in the almost perfect microgravity environment of the drop tower .these points lead to a reduction of the usable amount of data from the expected 11 gb to about 8 gb , which is still a substantial quantity of material for the purpose of the intended analysis ( see figure [ f : frames ] for an example ) .the full analysis of the data obtained by the space experiment onboard the rexus 12 flight is still ongoing and will be combined with the data sampled by the experiment during its hardware test in the bremen drop tower in august 2011 . the high - speed imaging data recorded during the experimental run of space comes in form of 8 bit grayscale bitmap frames ( see figure [ f : frames ] ) .these pictures are analyzed with self - written software using idl ( interactive data language ) .part of the basic image processing is for example the elimination of the rotational motion of the frames due to the circular shaking of the glass containers and the correction of remaining back - illumination irregularities .the real - time shaking frequency of the glass cells and the particle velocities can directly be determined by the known frame rate of the camera .as the many - particle systems in the space experiments are optically quite dense , the growth rate of aggregates can best be investigated by averaging a certain number of pictures before and after each frame .an example of this procedure is shown in figure [ f : boxcar ] . in individual cases ,the space hardware allows the direct observation of collisions between two dust aggregates , as shown in figure [ f : collision ] , recorded during a drop - tower test run .the dust aggregates can then be tracked along several frames and their collision velocity can be determined . by observing the growth rate of dust aggregates in the space particle containers during the rexus 12 flight , and by relating them to the shaking frequencies induced by the motor , a threshold velocity for sticking of submillimeter - sized dust aggregates can be determined . 
In the same way, an aggregate fragmentation limit can be investigated. In addition, the structure of the growing dust aggregates and its role in the agglomerate growth can also be determined. Finally, the sticking or bouncing outcome of individual collisions between dust aggregates can provide data points for the dust collision model presented in section [ s:intro ]. The full results of the SPACE data analysis will be the subject of a dedicated paper.

If the SPACE experiment were to be re-built, the first recommendation would be to either choose a camera with a higher level of autonomy (i.e., internal memory) or to use an onboard computer with more RAM. The Robin Z530 L single-board computer possesses 1 GB of RAM, which it uses as a buffer for incoming frames from the camera while it writes them to the external flash card. For higher acquisition rates, this buffer space is insufficient and the frame streaming starts experiencing severe data losses as the RAM overloads. This could be mitigated by allocating additional swap space on available hard-disk memory. However, the maximum possible frame rate of the camera (205 fps) could not be achieved, and a compromise at an acceptable rate of 170 fps, at which no considerable data loss occurred, had to be accepted.

The experience of the REXUS 12 flight has shown that the dust aggregates used in the SPACE experiment possess a very high sticking efficiency on the glass walls of the particle containers, even though these were coated with a nano-particle anti-adhesive layer. Hence, the shaking frequency of the particle containers should be adjusted accordingly to keep the particles free-floating in the inner container volume as much as possible, thereby optimizing the quantity of usable data among the recorded frames. The acceleration level due to residual atmospheric drag and rocket spin can also clearly be observed in the experimental data through the accumulation of particles in a specific corner of the particle containers (see figure [ f:frames ]). Having now gathered experience on the strength and direction of this perturbation, the shaking profile of a future experiment run could also be adapted accordingly.

We thank the REXUS/BEXUS project of the Deutsches Zentrum für Luft- und Raumfahrt (DLR) for the flight on the REXUS 12 rocket and for their contribution towards hardware expenses. This work was supported by the ICAPS (Interaction in Cosmic and Atmospheric Particle Systems) project of DLR (grant 50WM0936) and a fellowship from the International Max Planck Research School on Physical Processes in the Solar System and Beyond (IMPRS). We also thank Oliver Werner from the Fraunhofer Institute for Surface Engineering and Thin Films in Braunschweig for the anti-adhesive glass coating of the particle containers.
The Suborbital Particle Aggregation and Collision Experiment (SPACE) is a novel approach to studying the collision properties of submillimeter-sized, highly porous dust aggregates. The experiment was designed, built and carried out to increase our knowledge about the processes dominating the first phase of planet formation. During this phase, the growth of planetary precursors occurs by agglomeration of micrometer-sized dust grains into aggregates of at least millimeters to centimeters in size. However, the formation of larger bodies from these building blocks is not yet fully understood. Recent numerical models of dust growth lack support from experimental studies in the submillimeter size range, because such particles are predicted to collide at very gentle relative velocities of below 1 cm/s, which can only be achieved in a reduced-gravity environment.

The SPACE experiment investigates the collision behavior of an ensemble of silicate-dust aggregates inside several evacuated glass containers, which are agitated by a shaker to induce the desired collisions at chosen velocities. The dust aggregates are observed by a high-speed camera, allowing for the determination of the collision properties of the protoplanetary dust analog material. The data obtained from the suborbital flight with the REXUS (Rocket Experiments for University Students) 12 rocket will be directly implemented into a state-of-the-art dust growth and collision model.
given the extensive use of quantum entanglement as a resource for quantum information processing, the theory of entanglement, in particular entanglement quantification, is a topic of central importance to quantum information theory. however, apart from a limited number of cases such as low-dimensional hilbert spaces and pure states, the mathematical structure of entanglement is not yet fully understood. the entanglement properties of bipartite states have been widely explored (see for a comprehensive review). this has been aided by the fact that bipartite states possess a convenient mathematical structure in the form of the schmidt decomposition, the schmidt coefficients encompassing all their non-local properties. no such simplifying structure is known for larger systems. approaches using certain generalizations of the schmidt decomposition and group-theoretic or algebraic methods have been taken in this direction. a number of methods for comparing, quantifying or qualifying entanglement have been proposed for bipartite systems and/or pure states, such as entanglement of formation, entanglement cost, distillable entanglement, relative entropy of entanglement, negativity, concurrence and entanglement witnesses. however, these quantifications do not always lend themselves to being computed, except in some restricted situations. as such, a general formulation is still an open problem. it is known that state transformations under local operations and classical communication (locc) are very important to quantifying entanglement because locc can at best increase only classical correlations. therefore a good measure of entanglement is expected not to increase under locc. a necessary and sufficient condition for the possibility of such transformations in the case of bipartite states was given by nielsen. an immediate consequence of his result was the existence of _ incomparable _ states (states that cannot be obtained by locc from one another). bennett et al. formalized the notions of reducibility, equivalence and incomparability for multi-partite states and gave a sufficient condition for incomparability based on _ partial _ entropic criteria. in this work, our principal aim is not to quantify entanglement, but to develop graph-theoretic techniques to analyze the comparability of maximally entangled multipartite states of several qubits distributed between a number of different parties. we obtain various qualitative results concerning reversibility of operations and comparability of states by observing the combinatorics of multipartite entanglement. for our purpose, it is sufficient to consider the graph-theoretic representation of various maximally entangled states (represented by specific graphs built from epr, ghz and so on). although this might at first seem overly restrictive, we will in fact be able to demonstrate a number of new results. furthermore, being based only on the monotonicity principle, the approach can be adapted to any specific quantification of entanglement. therefore, our approach is quite generic, in principle applicable to all entanglement measures.
since the entanglement of maximally entangled states is usually represented by integer values, it turns out that we can analyze entangled systems simply by studying the combinatorial properties of graphs and set systems representing the states. the basic definitions and concepts are introduced through the framework set in section [ framework ]. we introduce a technique called _ bicolored merging _ in section [ bicoloredmerging ], which is essentially a combinatorial way of quantifying maximal entanglement between two parts of the system and inferring transformation properties to be satisfied by the states. in section [ skpsel ], we present our first result: the impossibility of obtaining two einstein-podolsky-rosen (epr) pairs among three players starting from a greenberger-horne-zeilinger (ghz) state (theorem [ skpghzreverse ]). we then show that this can be used to establish the impossibility of implementing a two-pronged teleportation (called _ selective teleportation _) given pre-shared entanglement in the form of a ghz state. we then demonstrate various classes of incomparable multi-partite states in section [ classmulentan ]. finally, we discuss the minimum number of copies of a state required to prepare another state by locc and present bounds on this number in terms of the _ quantum distance _ between the two states in section [ quantumdistance ]. we believe that our combinatorial approach vastly simplifies the study of entanglement in very complex systems. moreover, it opens up the road for further analysis, for example, to interpret entanglement topologically. in future works, we intend to apply and extend these insights to non-maximal and mixed multipartite states, and to combine our approach with a suitable measure of entanglement. in this section we introduce a number of basic concepts useful to describe the combinatorics of entanglement. first, an _ epr graph _ is a graph whose vertices are the players ( ) and edges ( ) represent shared entanglement in the form of an epr pair. formally: epr graph: for agents an undirected graph is constructed as follows: , and . the graph thus formed is called the epr graph of the agents. a spanning tree is a graph which connects all vertices without forming cycles (i.e., loops). accordingly: spanning epr tree: a _ spanning tree _ is a connected, undirected graph linking all vertices without forming cycles. an epr graph is called a _ spanning epr tree _ if the underlying undirected graph is a spanning tree. the above notions are generalized to more general multipartite entanglement by means of the concept of a _ hypergraph _. a usual graph is built up from edges, where a normal edge links precisely two vertices. a hyperedge is a generalization that links several vertices at once. a graph endowed with at least one hyperedge is called a hypergraph. from the combinatorial viewpoint, a simple and interesting connection can be made between entanglement and hyperedges: an -cat state (also sometimes called an -ghz state) corresponds to a hyperedge of size . in particular, an epr state corresponds to a simple edge connecting only two vertices. formally: entangled hypergraph: let be the set of agents and , where and is such that its elements (agents) are in -cat state. the hypergraph (set system) is called an entangled hypergraph of the agents. a graph is connected if there is a path (having a length of one or more edges) between any two vertices; the small sketch below encodes these objects and tests this property in passing.
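as a rough illustration of how these combinatorial objects can be encoded (the function and variable names below are ours, not part of the formalism), an epr graph is simply a set of two-element edges over the agents and an entangled hypergraph a collection of hyperedges, one per shared cat state; a union-find pass checks the spanning-epr-tree property used throughout the paper.

```python
def is_spanning_epr_tree(agents, epr_pairs):
    """True iff the EPR pairs form a spanning EPR tree on the agents:
    exactly n-1 pairs, no cycle, every agent reached (union-find test)."""
    agents = set(agents)
    edges = [tuple(e) for e in epr_pairs]
    if len(edges) != len(agents) - 1:
        return False
    parent = {a: a for a in agents}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                          # a cycle
        parent[ru] = rv
    return len({find(a) for a in agents}) == 1    # connected

# an EPR graph: one 2-element edge per shared EPR pair
epr_graph = [('A', 'B'), ('B', 'C'), ('C', 'D')]
print(is_spanning_epr_tree('ABCD', epr_graph))    # True

# an entangled hypergraph: one hyperedge per shared cat state
entangled_hypergraph = [frozenset('ABC'), frozenset('CD')]   # a GHZ state plus an EPR pair
```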
accordingly : connected entangled hypergraph : a sequence of hyperedges , , ... , in a hypergraph is called a _ hyperpath _ ( path ) from a vertex to a vertex if 1 . and have a common vertex for all , 2 . and are agents in , 3 . , and 4 . .if there is a hyperpath between every pair of vertices of in the hypergraph , we say that is connected .analogous to a spanning epr tree we have : entangled hypertree : a connected entangled hypergraph is called an entangled hypertree if it contains no cycles , that is , there do not exist any pair of vertices from such that there are two distinct paths between them .further : -uniform entangled hypertree : an entangled hypertree is called an -uniform entangled hypertree if all of its hyperedges are of size for . in ordinary graphs , a vertex that terminates , i.e. , has precisely a single edge linked to it is called a terminal or pendent vertex .this concept is extended to the case of hypergraphs : pendant vertex : a vertex of a hypergraph such that it belongs to only one hyperedge of is called a pendant vertex in .vertices which belong to more than one hyperedge of are called non - pendant . in the paperwe use polygons for pictorially representing an entangled hypergraph of multipartite states .( there should be no confusion with a closed loop of epr pairs because we consider only tree structured states ) .a hyperedge representing an -cat amongst the parties is pictorially represented by an -gon with vertices distinctly numbered by .we write these vertices corresponding to the vertices of the -gon in the pictorial representation in arbitrary order .this only means that out of qubits of the -cat , one qubit is with each of the parties .a result we will require frequently is that there exist teleportation protocols to produce -partite entanglement starting from pairwise entanglement shared along any spanning tree connecting the parties .that is , there exist locc protocols to turn a -party spanning epr tree into an -regular hypergraph consisting of a single hyperedge of size .the protocol is detailed in ref . , but the basic idea is readily described .it is essentially a scheme to deterministically create a maximally entangled -cat state from epr pairs shared along a spanning tree .briefly , the protocol consists in teleporting entanglement along a spanning tree .players not on terminal vertices along the tree execute the following subroutine. 
suppose player alice shares an -cat with preceding players along the tree and wishes to create an -cat state including bob , the next player down the tree .first she entangles an auxiliary particle with her particle in the -cat state by means of local operation .she then uses her epr pair shared with bob to teleport the state of the auxiliary particle to bob .the players , including alice and bob , now share an -cat state , as desired .another result we will require in some of our proofs , given as the theorem below , is that the spanning epr tree mentioned above is also a necessary condition to prepare an -cat state starting from shared epr pairs .[ skpnec ] given a communication network of agents with only epr pairs permitted for pairwise entanglement between agents , a necessary condition for creation of a -cat state is that the epr graph of the agents must be connected .proof of the theorem is given in appendix 1 using our method of bicolored merging developed in section iii .monotonicity is easily the most natural characteristic that ought to be satisfied by all entanglement measures .it requires that any appropriate measure of entanglement must not change under local unitary operations and more generally , the expected entanglement must not increase under locc .we should note here that in locc , lo involves unitary transformations , additions of ancillas ( that is , enlarging the hilbert space ) , measurements , and throwing away parts of the system , each of these actions performed by one party on his or her subsystem .cc between the parties allows local actions by one party to be conditioned on the outcomes of the earlier measurements performed by the other parties .apart from monotonicity , there are certain other characteristics required to be satisfied by entanglement measures .however , monotonicity itself vastly restricts the choice of entanglement measures ( for example , marginal entropy as a measure of entanglement for bipartite pure states or entanglement of formation for mixed states ) . 
in the present work, we find that monotonicity, where proven for a particular entanglement measure candidate, restricts a large number of state transformations and gives rise to several classes of incomparable (multi-partite) states. so, in order to study the possible state transformations of (multi-partite) states under locc, it is instructive to look at the kinds of state transformations under locc which monotonicity does not allow. we can observe that monotonicity does not allow the preparation of or more epr pairs between two parties starting from only epr pairs between them. in particular, it is not possible to prepare two or more epr pairs between two parties starting only with a single epr pair and only locc. this is an example of an impossible state transformation in the bipartite case as dictated by the monotonicity postulate. we anticipate that a large class of multi-partite states could also be shown to be incomparable by using impossibility results for the bipartite case through suitable reductions. for instance, consider transforming (under locc) the state represented by a spanning epr tree, say , to that of the state represented by another spanning epr tree, say (see figure [ figure30a ]). this transformation can be shown to be impossible by reducing to the bipartite case as follows: we assume for the sake of contradiction that there exists a protocol which can perform the required transformation. it is easy to see that the protocol is also applicable in the case when a party possesses all the qubits of parties and and another party possesses all the qubits of the parties and . this means that party is playing the role of parties and and is playing the role of parties and . clearly, any locc actions done within group ( ) are a subset of lo available to ( ) and any cc done between one party from and the other from is managed by cc between and . therefore, starting only with one edge ( ), they eventually construct just by lo (by local creation of epr pairs representing the edges and , by and by ). they then apply protocol to obtain with the edges and (refer to figure [ figure30c ]). all edges except and are local epr pairs (that is, both qubits are with the same party, or ). now the parties and share two epr pairs in the form of the edges and , even though they started by sharing only one epr pair. but this is in contradiction with monotonicity: expected entanglement should not increase under locc. hence, we can conclude that such a protocol cannot exist! the approach we took in the above example could also be motivated from the marginal entropic criterion (noting that this criterion in essence is also a direct implication of monotonicity). as is clear from the above example, the scheme aims to create a bipartition among the players in such a way that the marginal entropy of each partition is different for the two states. in many cases, this difference will simply correspond to a different number of epr pairs shared between the two partitions. given two multi-partite states, the relevant question is: is there a bipartition such that the marginal entropy for the two states is different? if yes, then the state (configuration of entanglement) corresponding to the higher entropy cannot be obtained from that corresponding to the lower entropy by means of locc. it is convenient to imagine the two partitions being ` colored ' distinctly to identify the partitions which they make up.
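the bipartition argument just sketched can be made concrete with a few lines of code: for a configuration of epr pairs, the marginal entropy of one side of a cut simply equals the number of pairs crossing the cut, so exhibiting a cut with different crossing counts for the two configurations witnesses the impossibility. the chain and star trees below are our own illustrative example, not the labels used in the figures.

```python
def epr_pairs_across(edges, part_a):
    """EPR pairs shared across a bipartition; for a collection of EPR pairs this
    equals the marginal entropy (in ebits) of the part_a side of the cut."""
    part_a = set(part_a)
    return sum(1 for u, v in edges if (u in part_a) != (v in part_a))

chain = [('A', 'B'), ('B', 'C'), ('C', 'D')]      # spanning tree T1
star  = [('A', 'B'), ('A', 'C'), ('A', 'D')]      # spanning tree T2

# cut {'A'} vs {'B','C','D'}: the star needs 3 crossing pairs, the chain only 1,
# so T1 cannot be converted to T2 by LOCC; the cut {'B'} shows the converse.
print(epr_pairs_across(chain, {'A'}), epr_pairs_across(star, {'A'}))   # 1 3
print(epr_pairs_across(chain, {'B'}), epr_pairs_across(star, {'B'}))   # 2 1
```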
in general, suppose we want to show that the multi-partite state cannot be converted to the multi-partite state by locc. this can be done by showing an assignment of the qubits (of all parties) to only two parties such that can be obtained from ( ) epr pairs between the two parties by locc while can be converted to more than epr pairs between the two parties by locc. this is equivalent to saying that each party is given either of two colors (say or ). finally all qubits with parties colored with color are assigned to the first party (say ) and those with parties colored with the second color to the second party (say ). this coloring is done in such a way that the state can be obtained by locc from fewer epr pairs between and than can be obtained from by locc. local preparation (or throwing away) of epr pairs is what we call merging in the combinatorial sense. keeping this idea in mind, we now formally introduce the idea of bicolored merging for such reductions in the case of multi-partite states represented by epr graphs and entangled hypergraphs. suppose that there are two epr graphs and on the same vertex set (this means that these two multi-partite states are shared amongst the same set of parties) and we want to show the impossibility of transforming to under locc; then this is reduced to a bipartite locc transformation which violates monotonicity, as follows: 1. bicoloring: assign either of the two colors or to every vertex, that is, each element of . 2. merging: for each element of , merge the two vertices and if and only if they have been assigned the same color during the bicoloring stage, and assign the same color to the merged vertex. call the graph obtained from the bcm (bicolored-merged) epr graph of and denote it by . similarly, obtain the bcm epr graph of . 3. the bicoloring and merging is done in such a way that the graph has more edges than that of . 4. give all the qubits possessed by the vertices with color to the first party (say, party ) and all the qubits possessed by the vertices with color to the second party (say, party ). combining this with the previous steps, it is ensured that in the bipartite reduction of the multi-partite state represented by , the two parties and share more epr pairs (say, state ) than for (say, state ). we denote this reduction as . now if there exists a protocol which can transform to by locc, then can also transform to just by locc as follows: ( ) plays the role of all vertices in which were colored ( ). the edges which were removed due to merging can easily be created by local operations (local preparation of epr pairs) by the party ( ) if the merged end-vertices of the edge were assigned color ( ). this means that starting from and only lo, can be created. this graph is virtually amongst parties even though there are only two parties. the protocol , then, can be applied to to obtain by locc. subsequently can be obtained by the necessary merging of vertices by lo, that is, by throwing away the local epr pairs represented by the edges between the vertices being merged. since the preparation of from by locc violates the monotonicity postulate, such a protocol cannot exist! an example of bicolored merging for epr graphs has been illustrated in figure [ figure31 ]. the bicolored merging in the case of entangled hypergraphs is essentially the same as that for epr graphs.
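the four steps above translate directly into a small search procedure: a bicoloring is a map from vertices to two colors, merging same-colored vertices leaves only the cross-colored edges as shared epr pairs, and any coloring under which the target graph retains strictly more cross edges than the source certifies impossibility. the sketch below uses our own names and enumerates all bicolorings, which is only feasible for small agent sets.

```python
from itertools import product

def bcm_edge_count(edges, coloring):
    """Steps 1-2: vertices of equal color are merged and same-colored edges become
    local EPR pairs; what remains is the number of EPR pairs shared between the
    two merged parties P1 and P2."""
    return sum(1 for u, v in edges if coloring[u] != coloring[v])

def find_bicolored_merging(vertices, edges_g, edges_h):
    """Steps 3-4: look for a bicoloring under which the merged H keeps strictly
    more EPR pairs than the merged G; returning one certifies that G cannot be
    converted to H by LOCC (it would increase entanglement across the cut)."""
    vertices = list(vertices)
    for bits in product('rg', repeat=len(vertices)):
        coloring = dict(zip(vertices, bits))
        if bcm_edge_count(edges_h, coloring) > bcm_edge_count(edges_g, coloring):
            return coloring
    return None

chain = [('A', 'B'), ('B', 'C'), ('C', 'D')]
star  = [('A', 'B'), ('A', 'C'), ('A', 'D')]
print(find_bicolored_merging('ABCD', chain, star))   # a coloring certifying chain -> star is impossible
```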
for the sake of completeness, we present it here. suppose there are two entangled hypergraphs and on the same vertex set (that is, the two multi-partite states are shared amongst the same set of parties) and we want to show the impossibility of transforming to under locc. the transformation of to can be reduced to a bipartite locc transformation which violates monotonicity, thus proving the impossibility. the reduction is done as follows: 1. bicoloring: assign either of the two colors or to every vertex, that is, each element of . 2. merging: for each element of , merge all vertices with color to one vertex and those with color to another vertex and give them colors and respectively. this merging collapses each hyperedge to either a simple edge or a vertex, and thus the hypergraph reduces to a simple graph with vertices assigned either of the two colors or . call the graph obtained from the bcm epr graph of and denote it by . similarly obtain the bcm epr graph of . 3. the bicoloring and merging is done in such a way that the graph has more edges than that of . 4. give all the qubits possessed by the vertices with color to the first party (say party ) and all the qubits possessed by the vertices with color to the second party (say party ). we denote the above reduction as . the rest of the discussion is similar to that for the case of epr graphs given before. in figure [ figure32 ], we demonstrate the bicolored merging of entangled hypergraphs. note that the two entangled hypergraphs and are locc comparable only if one of and does not hold. equivalently, if both and hold, then the entangled hypergraphs and are incomparable. it is also interesting to note at this point that locc incomparability shown by using the method of bicolored merging is in fact _ strong incomparability _ as defined in . we would also like to stress that any kind of reduction (in particular, various possible extensions of bicolored merging) which leads to the violation of _ any _ of the properties of a potential entanglement measure is pertinent for showing the impossibility of many multi-partite state transformations under locc. since the bipartite case has been extensively studied, such reductions can potentially provide many insights about the multi-partite case by exploiting results from the bipartite case. in particular, the definitions of epr graphs and entangled hypergraphs could also be suitably extended to capture more types of multi-partite pure states and even mixed states, and a generalization of the idea of bicolored merging as a suitable reduction for this case could also be worked out. we know that a ghz state amongst three agents , and can be prepared from epr pairs shared between any two pairs of the three agents using only locc. we consider the problem of _ reversing _ this operation, that is, whether it is possible to construct two epr pairs between any two pairs of the three agents from a ghz state amongst the three agents, using only locc.
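before answering this question, note that the hypergraph variant of the merging procedure is just as easy to mechanize: under a bicoloring, a hyperedge collapses to a single shared epr pair precisely when it contains both colors (a cat state split across a cut carries one ebit) and to a vertex otherwise. the sketch below is our own illustration; its example already exhibits, for the ghz configuration just mentioned, a bicoloring of the kind exploited in the next theorem.

```python
from itertools import product

def bcm_hypergraph_pairs(hyperedges, coloring):
    """Bicolored merging of an entangled hypergraph: a hyperedge containing both
    colors collapses to one EPR pair between the two merged parties, while a
    single-colored hyperedge collapses to a vertex (a purely local state)."""
    return sum(1 for e in hyperedges if len({coloring[v] for v in e}) == 2)

def hypergraph_merging_witness(vertices, hyper_g, hyper_h):
    """Search for a bicoloring under which H keeps strictly more cross pairs
    than G, certifying that G cannot be LOCC-converted to H."""
    vertices = list(vertices)
    for bits in product('rg', repeat=len(vertices)):
        coloring = dict(zip(vertices, bits))
        if bcm_hypergraph_pairs(hyper_h, coloring) > bcm_hypergraph_pairs(hyper_g, coloring):
            return coloring
    return None

# example: a single GHZ hyperedge versus two EPR pairs among A, B, C
ghz = [frozenset('ABC')]
two_eprs = [frozenset('AB'), frozenset('AC')]
print(hypergraph_merging_witness('ABC', ghz, two_eprs))   # e.g. {'A': 'r', 'B': 'g', 'C': 'g'}
```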
by using the method of bicolored merging, we answer this question in the negative by establishing the following theorem. [ skpghzreverse ] starting from a ghz state shared amongst three parties in a communication network, two epr pairs cannot be created between any two sets of two parties using only locc. suppose there exists a protocol for _ reversing _ a ghz state into two epr pairs using only locc. in particular, suppose protocol starts with a ghz state amongst the agents , and , and prepares epr pairs between any two pairs of , and (say , , and , corresponding to configuration as shown in figure [ figure33 ]). since we can prepare the ghz state from epr pairs between any two pairs of the three agents, we can prepare the ghz state starting from epr pairs between and , and and . once the ghz state is prepared, we can apply protocol to construct epr pairs between and and between and using only locc (i.e., configuration ). so, we can use only locc to convert a configuration where epr pairs exist between and and between and , to a configuration where epr pairs are shared between and and between and . the possibility of means that the marginal entropy of can be increased using only locc, which is known to be impossible. the same result could also be achieved by similar bicolored merging applied directly on the ghz state and any of or , but we prefer the above proof for stressing the argument on the symmetry of and with respect to the ghz state. moreover, this proof gives an intuition about the possibility of incomparability amongst spanning epr trees, as and are two distinct spanning epr trees on three vertices. we prove this general result in theorem [ twoeprtrees ]. the above theorem motivates us to propose some kind of comparison between a ghz state and two epr pairs in terms of the non-local correlations they possess. in this sense, therefore, a ghz state may be viewed as less than two epr pairs. it is easy to see that an epr pair between any two parties can be obtained starting only from a ghz state shared amongst the three parties and locc: the third party simply performs a measurement in the diagonal basis and sends the result to the other two, who then apply the corresponding suitable operations to obtain the required epr pair. from theorem [ skpnec ], we observe that a single epr pair, between any two of the three parties, is not sufficient for preparing a ghz state amongst the three parties using only locc. these arguments can be summarised in the following theorem. [ eprghz ] 1-epr pair a ghz state 2-epr pairs an interesting problem in quantum information theory is that of _ selective teleportation _. given three agents , and , and two qubits of unknown quantum states and with , the problem is to send to and to selectively, using only locc and a priori entanglement between the three agents. a simple solution to this problem is applying standard teleportation in the case where shares epr pairs with both and . an interesting question is whether any other form of a priori entanglement can help achieve selective teleportation. in particular, is it possible to perform selective teleportation where the a priori entanglement is in the form of a ghz state amongst the three agents?
the following theorem answers this question using the result of theorem [ skpghzreverse ]. [ seltel ] with a priori entanglement given in the form of a ghz state shared amongst three agents, two qubits cannot be selectively teleported by one of the three parties to the other two parties. suppose there exists a protocol which can enable one of the three parties (say ) to teleport two qubits and selectively to the other two parties (say and ). now takes four qubits; she prepares two epr pairs, one from the first and second qubits and the other from the third and fourth qubits. she then teleports the first and third qubits selectively to and using (consider the first qubit as and the third qubit as ). we note that in this way is able to share one epr pair each with and . but this is impossible because it allows to prepare two epr pairs starting from a ghz state and using only locc, which contradicts theorem [ skpghzreverse ]. hence follows the result. an immediate result comparing an -cat state with epr pairs follows from noting that, given a spanning epr tree among parties, an -cat state can be constructed using only locc via the teleportation protocol described in section [ framework ]. the result we present below generalizes theorem [ eprghz ]. [ catepr ] -epr -cat -spanning epr tree . we can argue in a similar manner that an -cat state amongst -parties cannot be converted just by locc to any form of entanglement structure which possesses epr pairs between any two or more different sets of two parties. assume this is possible for the sake of contradiction. then the two edges could be in either of the two forms: (1) and , and (2) and , where are all distinct. in bicolored merging assign the colors as follows: in case (1), give color to and and give color to the rest of the vertices; in case (2), give color to and color to the rest of the vertices. since both cases are contrary to our assumption, the assertion follows. moreover, from theorem [ skpnec ] (see appendix 1 for the proof), no disconnected epr graph can yield an -cat just by locc. these two observations combined lead to the following theorem, which signifies the fact that these two multi-partite states cannot be compared.
[ catgraph ] a cat state amongst agents in a communication network is locc incomparable to any disconnected epr graph associated with the agents having more than one edge. the above result indicates that there are many possible forms of entanglement structures (multi-partite states) which cannot be compared at all in terms of the non-local correlations they may have. this simple result is just an implication of the combinatorics necessary for the preparation of cat states. one more interesting question with respect to this combinatorics is to compare a spanning epr tree and a cat state. a spanning epr tree is combinatorially sufficient for preparing the cat state and thus seems to entail more non-local correlations than a cat state. the question whether this ordering is strict needs to be investigated further. it is easy to see that an epr pair between any two parties can be obtained starting from a cat state shared amongst the agents just by locc (theorem [ catepr ]). therefore, given copies of the cat state we can build all the edges of any spanning epr tree just by locc. but whether this is the lower bound on the number of copies of -cat required to obtain a spanning epr tree is even more interesting. the following theorem shows that this is indeed the lower bound. [ treecat ] starting with only copies of an -cat state shared amongst its agents, no spanning epr tree of the agents can be obtained just by locc. suppose it is possible to create a spanning epr tree from copies of -cat states. as we know, an -cat state can be prepared from any spanning epr tree by locc. thus, if copies of -cat can be converted to , then copies of any spanning epr tree can be converted to just by locc. in particular, copies of a chain epr graph (which is clearly a spanning epr tree, figure [ figure36 ]) can be converted to just by locc. now, we know that any tree is a bipartite connected graph with edges across the two parts. let vertices be the members of the first group and the rest be in the other group. construct a chain epr graph where the first vertices are in the sequence , and the rest of the vertices are from the other group in the sequence (figure [ figure36 ]). in bicolored merging, we give the color to the parties and the rest of the parties are given the color . this way we are able to create epr pairs (note that there are edges in across the two groups) between and starting from only epr pairs (considering the chain-like spanning epr trees). so, we conclude that copies of -cat cannot be converted to any spanning epr tree just by locc. see figure [ figure36 ] for an illustration of the required bicolored merging. the proof could also be achieved by a similar kind of bicolored merging applied directly on -cat and . in the preceding results we have compared spanning epr trees with cat states. we discuss the comparability / incomparability of two distinct spanning epr trees in the next theorem and corollary. [ twoeprtrees ] any two distinct spanning epr trees are locc-incomparable. _ proof : _ let and be the two distinct spanning epr trees on the same vertices. clearly, there exist two vertices (say and ) which are connected by an edge in but not in .
also, by virtue of the connectedness of spanning trees, there will be a path between and in . let this path be with (see figure [ figure37 ]). since , must exist. let be the subtree in rooted at except for the branch which contains the edge , the subtree in rooted at except for the branch which contains the edge , the subtree in rooted at except for the branches which contain either of the edges and ( ), the subtree in rooted at except for the branch which contains the edge , and the subtree in rooted at except for the branch which contains the edge . it is easy to see that the set is nonempty as and , being distinct, must contain more than two vertices. also and must be disjoint; for otherwise there would be a path between and in which does not contain the edge , and thus two paths between and in , contradicting the fact that is a spanning epr tree (figure [ figure37 ]). with these two characteristics of and , it is clear that will lie either in or in . without loss of generality, let us assume that . now we do bicolored merging where the color is assigned to and all vertices in and the color is assigned to the rest of the vertices (refer to figure [ figure37 ] for an illustration). since and were chosen arbitrarily, the same arguments also imply that there cannot exist a method which converts to . this leads to the conclusion that any two distinct spanning epr trees are locc incomparable. [ treenum ] there are at least exponentially many locc-incomparable classes of pure multi-partite entangled states. _ proof : _ we know from results in graph theory that on labelled vertices there are possible distinct spanning trees. hence there are distinct spanning epr trees in a network of agents. from theorem [ twoeprtrees ] all these spanning epr trees are locc incomparable. it can be noted here that the most general local operation on qubits is an element of the group (local unitary rotations on each qubit alone). so, if two states are found incomparable, this means that there are actually two incomparable equivalence classes of states (where members in a class are related by a transformation). thus we have at least exponentially many locc-incomparable classes of multi-partite entangled states. since entangled hypergraphs represent more general entanglement structures than those represented by epr graphs (in particular, spanning epr trees are nothing but 2-uniform entangled hypertrees), it is likely that there will be even more classes of incomparable multi-partite states, and this motivates us to generalize theorem [ twoeprtrees ] to entangled hypertrees. however, remarkably, this intuition does not work directly and there are entangled hypertrees which are not incomparable. but there are a large number of entangled hypertrees which do not fall under any such partial ordering and thus remain incomparable. to this end we present our first incomparability result on entangled hypergraphs. [ pendhypincomp ] let and be two entangled hypertrees. let and be the sets of pendant vertices of and respectively. if the sets and are both nonempty then the multi-partite states represented by and are necessarily locc-incomparable. proof : using bicolored merging we first show that cannot be converted to under locc; the impossibility of the reverse conversion will then be immediate. since is nonempty, there exists such that . that is, is pendant in but non-pendant in (figure [ figure30d ]).
in the bicolored merging assign the color to the vertex and the color to all other vertices. this reduces to a single epr pair shared between the two parties and , whereas reduces to at least two epr pairs shared between and . the complete bicolored merging is shown in figure [ figure30d ]. we note that this proof does not utilize the fact that and are entangled hypertrees, and thus the theorem is indeed true even for entangled hypergraphs satisfying the conditions specified on the set of pendant vertices. the conditions specified on the set of pendant vertices in theorem [ pendhypincomp ] cover a very small fraction of the entangled hypergraphs. however, these conditions are not necessary and it may be possible to find further characterizations of incomparable classes of entangled hypergraphs. we present two examples where the conditions of theorem [ pendhypincomp ] are not satisfied. _ example-1 _ : (figures [ figure38 ] and [ figure39 ]) but either or . _ example-2 _ : (figure [ figure310 ]) in the first example, the entangled hypergraphs and satisfy and . and are comparable in figure [ figure38 ] but incomparable in figure [ figure39 ]. in figure [ figure39 ], the incomparability has been proved by showing that is not convertible to under locc, while the impossibility of the reverse conversion follows from the proof of theorem [ pendhypincomp ] ( ). figure [ figure310 ] gives examples of comparable and incomparable entangled hypergraphs with condition . theorem [ twoeprtrees ] shows that two distinct spanning epr trees are locc incomparable, and spanning epr trees are nothing but 2-uniform entangled hypertrees. therefore, a natural generalization of this theorem would be to -uniform entangled hypertrees for any . as we show below, the generalization indeed holds. it should be noted that theorem [ pendhypincomp ] does not necessarily capture such entanglement structures (multi-partite states) (figure [ figure311 ]). however, in order to prove that two distinct -uniform entangled hypertrees are locc incomparable, we need the following important result about -uniform hypertrees; see appendix 2 for the proof. [ lemmaruht ] given two distinct -uniform hypertrees and with , there exist vertices such that and belong to the same hyperedge in but necessarily to different hyperedges in . now we state one of our main results on the locc incomparability of multi-partite entangled states in the following theorem. [ hyptree ] any two distinct -uniform entangled hypertrees are locc-incomparable. proof : let and be the two -uniform entangled hypertrees. if then and happen to be two distinct spanning epr trees and the proof follows from theorem [ twoeprtrees ]. therefore, let . now from theorem [ lemmaruht ], there exist such that and belong to the same hyperedge in but necessarily to different hyperedges in . let this hyperedge in be . also, since , being a hypertree, is _ connected _ , there exists a path between and in . let this path be . clearly , because and necessarily do not belong to the same hyperedge in . we introduce the following notation (figure [ figure312 ]). sub-hypertree rooted at in except the branch that contains . sub-hypertree rooted at in except the branch that contains . sub-hypertree rooted at in except the branches containing and . collection of all sub-hypertrees in rooted at some vertices in other than and (where and ) except for the branches which contain . set of all vertices from which are not contained in .
sub - hypertree rooted at in except the branch that contains . sub - hypertree rooted at in except the branch that contains . collection of all sub - hypertrees in rooted at some vertices in except for the branches which contain .in order to complete the proof we consider the following cases : _ case _ : such that without loss of generality let us take . now since , exactly one of , , or for some . accordingly there will be three subcases ._ case _ : for some ( take such minimum ) .do bicolored merging where the vertex along with all the vertices in are given the color and the rest of the vertices are given the color . _ case : for some . do the bicolored merging while assigning the colors as in the above case ._ case _ : for some . bicolored merging in this case is also same as in case ._ case _ : there does not exist any such that . clearly , and .note that whenever we are talking of set relations like union , containment etc ., we are considering the trees , edges etc . as sets of appropriate vertices from which make them .first we establish the following claim . _ claim _ : such that .we have .therefore , both and exist and since is -uniform .also is empty , for , otherwise there will be a cycle in which is not possible as is a hypertree .therefore , .also implies that .it is clear that .therefore , since .also , and so by pigeonhole principle , and .hence our claim is true .now we have such that . since , by the definition of it is clear that there must exist such that , the sub - hypertree in rooted at except for the branch containing . depending on whether or , we break this case into several subcases and futher in sub - subcases depending on the part in where lies ._ case _ : ( figure [ figure313 ] ) ._ case _ : .do the bicolored merging where and the vertices in are assigned the color and the rest of the vertices from are given the color ._ case _ : .bicolored merging is done where as well as all the vertices in are assigned the color and rest of the vertices from are given the color ._ case _ : . here in this case , depending on whether is in or not , there can be two cases ._ case _ : .bicolored merging is done where all the vertices in are given the color and rest of the vertices are assigned the color ._ case _ : . implies that either for some , or , where for some and . for both of these possibilities ,bicolored merging is the same and is done as follows : assign the color to as well as all vertices in and assign the color to rest of the vertices ._ case _ : ( figure [ figure314 ] ) ._ case _ : .do the bicolored merging where all the vertices in including are given the color and rest of the vertices are assigned the color ._ case _ : . in bicolored merginggive the color to all the vertices ( including ) in and color to the rest of the vertices ._ case _ : .in this case depending on whether , or the bicolored merging will be different ._ case _ : .bicolored merging is done where all the vertices in are given the color and rest of the vertices are assigned the color ._ case _ : . implies that either , or for some . in any casedo the bicolored merging where the color is assigned to all the vertices in and rest of the vertices are assigned the color . now that we have exhausted all possible cases and shown by the method of bicolored merging that the -uniform entangled hypertree can not be locc converted to the -uniform entangled hypertree .the same arguments also work for showing that can not be locc converted to by interchanging the roles of and . hence the theorem follows . 
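returning to corollary [ treenum ], the count of labelled spanning trees behind it is cayley's formula, which can be checked by brute force for small networks: labelled trees are in bijection with prüfer sequences, so decoding all sequences enumerates all spanning epr trees. the decoding routine below is standard; the example size is our own choice.

```python
from itertools import product

def cayley_count(n):
    """Number of labelled spanning trees on n vertices (Cayley's formula)."""
    return n ** (n - 2) if n >= 2 else 1

def prufer_to_tree(seq):
    """Decode a Pruefer sequence into the edge list of the labelled tree it
    encodes; iterating over all sequences of length n-2 enumerates all
    n**(n-2) labelled spanning trees on vertices 0..n-1."""
    n = len(seq) + 2
    degree = [1] * n
    for x in seq:
        degree[x] += 1
    edges = []
    for x in seq:
        leaf = min(i for i in range(n) if degree[i] == 1)   # smallest remaining leaf
        edges.append((leaf, x))
        degree[leaf] -= 1
        degree[x] -= 1
    u, v = (i for i in range(n) if degree[i] == 1)          # exactly two vertices remain
    edges.append((u, v))
    return edges

n = 5
trees = [prufer_to_tree(s) for s in product(range(n), repeat=n - 2)]
assert len(trees) == cayley_count(n)   # 125 labelled spanning EPR trees on 5 agents
```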
before ending our section on the locc incomparability of multi-partite states represented by epr graphs and entangled hypergraphs, we note that the partial entropic criterion of bennett et al., which gives a sufficient condition for the locc incomparability of multi-partite states, does not capture the locc incomparability of spanning epr trees or entangled hypertrees in general. consider two spanning epr trees and on three vertices (say ). is such that the vertex pairs and form the two edges, whereas in the vertex pairs and form the two edges. it is easy to see that and are not marginally isentropic. in the proof of theorem [ twoeprtrees ], we have utilized the fact that there exist at least two vertices which are connected by an edge in but not in . this follows as and are different and they also have an equal number of edges (namely , if there are vertices). in fact, in general there may exist several such pairs of vertices depending on the structures of and . fortunately, the number of such pairs of vertices has some nice features, giving rise to a metric on the set of spanning (epr) trees with a fixed vertex set and thus to a concept of distance. the distance between any two spanning (epr) trees and , denoted by , on the same vertex set is defined as the number of edges in which are not in . let us call this distance the _ quantum distance _ between and . we have proved in theorem [ twoeprtrees ] that obtaining from is not possible just through locc, so we need to do quantum communication. the minimum number of qubits required to be communicated for this purpose should be an interesting parameter related to state transformations amongst multi-partite states represented by spanning epr trees; let us denote this number by . we note that . this is because each edge not present in can be created by only one qubit communication. the exact value of will depend on the structures of and and, as we can note, on the number of edge-disjoint paths in between the vertex pairs which form an edge in but not in . we can say more about the quantum distance. recall theorem [ treecat ], where we show that a lower bound on the number of copies of -cat needed to prepare a spanning epr tree by locc is . can we obtain a similar lower bound in the case of two spanning epr trees and relate it to the quantum distance? the answer is indeed yes. let denote the minimum number of copies of the spanning epr tree required to obtain just by locc. we claim that . the lower bound follows from theorem [ twoeprtrees ]. the upper bound is also true for the following reason. is the number of edges (epr pairs) present in but not in . for each such edge in (let be the vertices forming the edge), while converting many copies of to by locc, an edge between and must be created. since is a spanning tree and therefore connected, there must be a path between and in , and this path can be converted (using entanglement swapping) to an edge between them (i.e., an epr pair between them) using only locc. hence one copy each will suffice to create each such edge of . thus copies of will be sufficient to create all such edges in ; one more copy will supply all the edges common to and . an even more interesting point is that both these bounds are saturated: there do exist spanning epr trees satisfying these bounds (figure [ figure315 ]). c. h. bennett , g. brassard , c. crepeau , r. jozsa , a. peres , and w. k.
wootters , _ teleporting an unknown quantum state via dual classical and einstein - podolsky - rosen channels _ , phys .lett . 70 , 1895 ( 1993 ) .c. h. bennett , d. p. divincenzo , j. a. smolin , and w. k. wootters , phys .rev . a 54 , 3824 ( 1996 ) ; eprint quant - ph/9604024 ( 1996 ) .g. vidal , w. dr , j. i. cirac , phys .89 , 027901 ( 2002 ) ; eprint quant - ph/0112131 .e. m. rains , phys .a 60 , 173 ( 1999 ) ; erratum : phys .a 63 , 173 ( 1999 ) .l. henderson , v. vedral .phys rev lett .84 , 2263 ( 2000 ) .g. vidal and r.f .werner , quant - ph/0102117 .w. k. wootters , phys .80 , 2245 ( 1998 ) .m. horodecki , p. horodecki , and r. horodecki , phys .lett . a 223 , 1 ( 1996 ) .m. a. nielsen and i. l. chuang , _ quantum computation and quantum information _ , cambridge university press ( 2002 ) .m. a. nielsen , _conditions for a class of entanglement transformations _ , phys .83 , 436 ( 1999 ) .* proof of theorem [ skpnec ] * : + we use the method of bicolored merging to prove the fact that any disconnected epr graph on -vertices can not be converted to an -cat state on those vertices under locc .we first note that the bcm epr graph of an -cat state , irrespective of the bicoloring done , is always a graph which contains exactly one edge .now as is disconnected it will have more than one connected components .let these components be , where .the bicoloring is done as follows : assign the color to all the vertices in the component and the color to all other vertices i.e. all vertices in . after merging, therefore , reduces to a disconnected graph with no edges i.e. the bcm epr graph of is a graph with isolated vertices and no edges .now if we are able to prepare an -cat state from just using locc , we could also prepare an epr pair between two parties who were never sharing an epr pair just using locc .this violates monotonicity and hence the theorem is proved .proof of the claim : we first show that on the same vertex set , the number of hyperedges in any -uniform hypertree is always same .let and be the number of vertices and hyperedges in a -uniform hypertree .we show by induction on that .now take a -uniform hypertree with hyperedges . remove any of the hyperedges to get another hypergraph ( which may not be connected ) having only edges .this removal may introduce connected components ( sub - hypertrees ) ; .let these components have respectively number of hyperedges .therefore , .the total number of vertices in the new hypergraph ( with the sub - hypertrees as components ) , where is the number of vertices in the component .therefore , ( under induction assumption ) .now the number of vertices in the original hypertree , because vertices were already covered , one each in the components .therefore , .the result is thus true for and hence for any number of hyperedges by induction .this result implies that any -uniform hypertree on the same vertex set will always have the same number of hyperedges .take any vertex say .since and is a hypertree therefore connected , can not be an isolated vertex and therefore such that .take and .this proves our claim .therefore , such that they are in the same hyperedge in ( namely in ) but in necessarily different edges in , otherwise ( that is , if they lie in the same hyperedge in )
we develop graph-theoretic methods for analysing maximally entangled pure states distributed between a number of different parties. we introduce a technique called _ bicolored merging _ , based on the monotonicity feature of entanglement measures, for determining combinatorial conditions that must be satisfied for any two distinct multiparticle states to be comparable under local operations and classical communication (locc). we present several results based on the possibility or impossibility of comparability of pure multipartite states. we show that there are exponentially many such entangled multipartite states among agents. further, we discuss a new graph-theoretic metric on a class of multi-partite states, and its implications.
the internationalization of market outlets, the individualization of consumer needs and the rapid implementation of technical innovation have become key factors of success in the modern economic market. in order to survive in international competition, enterprises are forced to react dynamically to new requirements and to make permanent modifications and adaptations of their own structures. in particular this concerns the planning processes. planning in manufacturing systems is traditionally organized top-down. the strategic level of planning transmits its results to the tactical level, which in turn triggers the operational level of the planning process. the final result of the planning process is a detailed schedule of manufacturing to be implemented on the shop floor. every encountered disturbance unforeseen in the schedule triggers a new planning cycle. the effort, cost and time required for replanning can be essentially reduced by making the planning process adaptive to disturbances. in this way modern flexible manufacturing requires a new approach that introduces elements of self-organization into operational control. taking into account the spatial distribution of manufacturing elements and the requirement of flexibility for the whole system, the concept of autonomous agents has found some applications in this field. this approach seems very promising for guaranteeing the required robustness, fault tolerance, adaptability and agility in the field of transformable manufacturing systems. the application of agents to manufacturing also requires the development of new, and the adaptation of known, approaches towards typical problems of multi-agent technology, like distributed problem solving, planning or collective decision making. this paper deals with the distributed constraint-based short-term planning (assignment) process supported by a multi-agent system (mas). the assignment problem is often encountered in manufacturing; it is a part of operations research / management science (or / ms). it can be classified into scheduling, resource allocation and planning of operation order (e.g. ). this is a classical np-hard problem; there are known solutions by self-organization, combinatorial optimization, evolutionary approaches, constraint satisfaction and optimization, and discrete dynamic programming. however, these methods were developed as central planning approaches; distributed or multi-agent planning for the assignment problem is in fact not researched (overview e.g. in ). generally, this problem consists in assigning a lot of low-level jobs (like `` to produce one piece with defined specification '') to available machines so that all restrictions are satisfied. the solution consists of four steps. the first step is to prove whether the machine is technologically able to manufacture, whereas the second step is to prove whether the machine is organizationally available, e.g. without prearrangements. the first two steps formalize the constraint problem, where the following criteria should be taken into account: * technical criteria determine necessary features of a piece, and on this basis it can be decided whether a machine is able to manufacture this kind of feature, e.g. the drilling feature. * technological criteria determine whether the machine can operate with the necessary quality or determine a technologically necessary order, e.g. the statutory tolerances. * geometrical criteria result from a geometrical description of the workpiece, e.g.
whether a necessary chucking is possible. * organizational criteria are a set of specifications of production orders and machines. firstly, they define whether a machine has an available time slot and, secondly, whether this time slot is suitable for manufacturing the given production order. * optimization criteria like cost or delivery time. all these criteria determine the agent-based process planning in the form of corresponding constraints, which reduce the decision space of the assignment problem. the third and fourth steps of the solution are the distributed constraint-satisfaction problem (csp) and composition / optimization (cop) correspondingly. in the considered example a multitude of parts and variants of production orders has to be manufactured. in total there are different types of workpieces (5 - 20 pieces of each type, see fig. [ fig_b5_ws_of_ws ]) that have to be manufactured on the available machines. table [ assignment2 ] shows an exemplary sequence of working steps, where all mentioned technological constraints are already considered. _ technological table for a workpiece, * ws * - working step, * l / m * - length / machine. a zero in the length (of a working step) at the corresponding machine means that this machine cannot perform the requested operation. the order of working steps means that, e.g., the steps 2,3,4,5,6 should be produced after step 1 and before step 7. it is natural to assume these steps cannot be performed at the same time on different machines. _ processing of each workpiece consists of several working steps (defined by a technological process); all these working steps cannot be processed on one machine. each working step has a different length and also cost. moreover, each type of piece has its own technology, i.e. its processing consists of different working steps. for simplification it is assumed that the available machines are of the same type, therefore the cost and length of the same working step do not differ between these machines (in the general case they are different). the aim is to generate a plan of how to manufacture these workpieces with minimal cost, minimal time (or other optimization criteria), taking into account the restrictions summarized in table [ assignment2 ]. let us denote a working step as , where is the type of workpiece and the number of the working step; an available machine is denoted as , where is the number of the machine. we also need to introduce a piece , where is a priority of production and is the number of this piece. in this way , denote the start and finish positions of the corresponding working step that belongs to the corresponding piece ( , for all pieces ). we start with the definition of these values: $$\begin{aligned} p^{m}_{n \in [1-20]}\big(ws^{\,i \in \{a,b,c,d,e\}}_{\,j \in [1,\ldots,11]}\big) &= o \in \mathit{operation}, \\ m_{k \in \{1,2,3,4\}} &= \{\, o \in \mathit{operation} \,\}, \\ st\big(p^m_n(ws^i_j)\big) &= \{\, t \ge 0 ,\ t \in \mathbb{R} \,\}, \\ fn\big(p^m_n(ws^i_j)\big) &= \{\, st\big(p^m_n(ws^i_j)\big) + \mathit{length}\big(p^m_n(ws^i_j)\big) \,\}. \end{aligned}$$
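as a small illustration (our own encoding, not the authors'), the technological table and the st / fn variables just introduced translate directly into simple nested data structures; all numbers below are invented.

```python
# for each workpiece type, the ordered working steps with their admissible
# machines and processing lengths; a zero length in the table means "not
# producible on this machine" and is simply omitted here
technology = {
    'a': [                                   # workpiece type a
        {'step': 1, 'lengths': {'m1': 2.0, 'm2': 2.5}},
        {'step': 2, 'lengths': {'m2': 1.0}},
        {'step': 3, 'lengths': {'m1': 1.5, 'm3': 1.5}},
    ],
}

def make_variables(wtype):
    """One (machine, st, fn) record per working step of a piece of type wtype,
    mirroring the st(...) and fn(...) variables above; the values are what the
    constraint propagation has to determine."""
    return {ws['step']: {'machine': None, 'st': None, 'fn': None}
            for ws in technology[wtype]}

piece_a1 = make_variables('a')    # decision variables for piece no. 1 of type a
```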
the first constraint determines a correspondence between the operations of a working step and those of the -machine. the technological restrictions given by table [ assignment2 ] can be rewritten in the following form: $$\begin{aligned} c_{2} &= \{\, (fn(ws^a_{[1]}) < st(ws^a_{[2-6]})) \subset ws^a_j \times ws^a_j \,\}, \\ c_{3} &= \{\, (fn(ws^a_{[2-6]}) < st(ws^a_{[7]})) \subset ws^a_j \times ws^a_j \,\}, \\ c_{4} &= \{\, (fn(ws^a_{[7]}) < st(ws^a_{[8-11]})) \subset ws^a_j \times ws^a_j \,\}, \end{aligned}$$ where the bracketed indices refer to the corresponding working steps of table [ assignment2 ]. the working steps 8 - 11 may be produced in an arbitrary order but must not overlap in time: $$\begin{aligned} c_{5} = \{\, \big( j \in [\, st(ws^a_{[w]}), \ldots, fn(ws^a_{[w]}) \,] \;\Rightarrow\; j \notin [\, st(ws^a_{[w']}), \ldots, fn(ws^a_{[w']}) \,] \big) \subset ws^a_j \times ws^a_j ;\; w , w' = 8,9,10,11 ;\; w \neq w' \,\}. \end{aligned}$$ the priority of production can be expressed by . as soon as the variables, the domains of values and the constraints are defined, a propagation approach can be started. the aim is to restrict the values of the variables (or to find such values of the variables) that satisfy all constraints. this propagation can be represented in the way shown in fig. [ constraint ]. all working steps that belong to the same workpiece build a sequence. every node in this sequence gets a `` finish ''-position of a working step from the previous node. using this value, the current node looks for `` start ''-positions of the next working step that satisfy the local constraints, calculates the `` finish ''-positions and propagates them to the next node. if no position satisfying the local constraints can be found, the node requests another `` finish ''-position from the previous node. in this way the network can determine locally consistent positions of all working steps. after that the obtained values should be tested for global consistency. the csp approach described in the previous section is necessary and sufficient for solving the discussed kind of assignment problem. implemented by one of the usual programming techniques, it will generate the required plan. however, working in the presence of disturbances (like machine failures, technological changes and so on) requires additional effort to adapt the planning approach to these changes. the principles of such an adaptation are not contained in the plan itself; an additional mechanism is needed. as mentioned in the introduction, the multi-agent concept can be used as such a mechanism that lends the manufacturing system more flexibility to adapt to disturbances. but what is the cause and cost of this additional feature? there is a long discussion of this point based e.g. on decentralization (e.g. ) or on several dynamic properties of mas (e.g. in ). we would like to add the following argument to this discussion. the multi-agent system can be considered from the viewpoint of the theory of finite-state automata. the transition of an -state automaton (with or without memory, it does not change the matter) from one state to another is determined by some rules (by a program), therefore the automaton behaves in a completely deterministic way. if a control cycle is closed (see e.g. ), the automaton is autonomous, i.e. behaves independently of the environment (other automata). now consider a few such automata coupled into a system _ in a way that keeps their autonomy _ .
forasmuch as each automaton behaves according to own rules , there is no a central program that determines a states transition of the whole system . in the `` worst case '' coupling automatons with states ,the coupled system can demonstrate states .evidently this `` worst case '' has never to arise in the system , but how to control a behaviour of the distributed system without a central program ( without a centralized mediator ) ?the point is that all automata are continuously communicating in order to synchronize own states in regard to environment , to the solving task , etc .( in this case the notion of an automaton is replaced by the notion of an agent ) .the agents during communication `` consider '' all possible states and then `` choose '' such a state that is the most suitable to a currently solving problem .this is a main difference to the `` centralized programming '' approach .the central program can react only in such a way that was preprogrammed .for example 10 agents with 10 states can demonstrate different combinations .however no programmer is able to predict all situations to use all these states .thus , the `` centralized programming '' approach restricts the multi - agent system although there are essentially more abilities to react .the sufficient number of degrees of freedom represents a key problem of multi - agent technology . on the one handif the system is hard restricted in the behaviour , the advantage of mas is lost . on the other hand ,if the system has too many degrees of freedom it can communicate an infinitely long time .in other words only several combinations of agents states have a sense and the point is how to achieve and to manage these states .this is a hard problem arising in many branches of science and engineering and correspondingly there are several ways to solve it .the suggested here solution is based on a hierarchic software architecture that supports agent s autonomy . before starting to describe an approach, it needs to mention one methodological point concerning decentralization of multi - agent system , shown in fig .[ general_ideas ] . _ ] the mas solves a problem by using some methodological basis .for example the csp and cop approaches basically underlie the solution of constraint problems .the point is that a methodological basis , in almost all cases , is formulated in a centralized way .it looks like a `` battle plan '' , where all agents and their interactions are shown .therefore this global description is often denoted an _interaction pattern_. however the agents do not possess such a global point of view and the interaction pattern has to be distributed among agents . this decentralization concerns global information , messages transfer , synchronization , decision making and so on .the decentralized description of the chosen method should determine an individual activity of an agent as well as its interaction with other agents .it is also important that all agents behave in ordered way , i.e. to include cooperation mechanisms ( protocols ) into this distributed description . in order to enable a transition from the interaction pattern to the _ cooperation protocol _ ( see fig .[ general_ideas ] ) a notion of a role is introduced .a role is associated with a specific activity needed to be performed ( according to a methodological basis ) .agent can `` play ''one role or a sequence of roles . 
in this way interactionsare primarily determined between roles , an agent ( with corresponding abilities ) handles according to the role playing at the moment .an advantage of this approach is that the centralized description ( familiar for human thinking ) is preserved whereas the roles in the interaction pattern are `` in fact '' already distributed , i.e. a mapping `` agent - on - role '' may be performed in a formalized way by a program .thus , an interaction pattern is a `` mosaic image '' that from afar looks like a common picture ( method ) , but at a short distance is a set of separate fragments ( roles ) . moreovera concept of roles allows decoupling the structure of cooperation processes from agents organization , so that any modification of agents does not affect the cooperation process and vice versa .the interaction pattern determines a _ primary activity _ ( primary algorithm ) of multi - agent system .the primary algorithm includes also some parameters whose modifications can be commonly associated with disturbances .variation of these parameters does not disturb an activity of agents . in this casethese are the expected disturbances , a reaction of the system on them is incorporated into the primary algorithm . however due to specific disturbances every agent can reach such a state that is not described by a primary algorithm and where a performing of the next step is not possible . in this casethe agent is in emergency state and tries to resolve the arisen situation all alone or with an assistance of neighbour agents ( _ secondary activity _ ) .if the abilities of an agent are not sufficient or it requires additional resources it calls a rescue agent .the rescue agent is an agent that possesses specific ( usually hardware ) abilities . anyway, the aim of agents in emergency state is to change a part of the primary algorithm so that to adapt it to disturbances .the disturbances causing local emergency are expected ( predicted ) but not introduced into the primary algorithm .the primary algorithm as well as its parameterization is optimal only for specific conditions ( e.g. combinatorial / heuristic methods for solutions of combinatorial problems , csp / cop for constraints , etc . ) .if disturbances change these conditions the primary algorithm became non optimal and it has no sense to repair it .all agents have collectively to recognize such a global change and to make a collective decision towards replacement of the primary algorithm .this change corresponds to a global emergency .the disturbances causing the global emergency are not expected ( predicted ) , however they influence the conditions of primary algorithms and in this way can be recognized .finally , there are such disturbances that can not be absorbed by any changing of an algorithm , they remain irresolvable .forasmuch as all these points can not be considered in details in the framework of the paper , we restrict ourselves only by treating the mentioned primary algorithm in the language of cooperative processes by the modified petri nets .the secondary activity as well as emergency states and rescue agents are not considered here . in the case of an assignment planning ,the primary algorithm is determined by the csp approach described in sec .[ s_assignment_problem ] ( generally usage of constraint - based approaches for mas is not new , see e.g. 
) .each working step in the approach is represented by a node in the constraint network shown in fig .[ constraint ] .these nodes are separated from one another , moreover their behaviour is determined by propagations .therefore it is natural to give a separate role ( r ) to each node .however , before starting a propagation , this network has to be created and parameterized by technology , machines , number of workpieces and so on . these two steps ( parameterization and propagation ) will be described by interaction patterns using corresponding roles . for description of agents activities the rope ( role oriented programming environment ) methodology is developed . using this approach ,the roles executed by agents are described in a formal way by the perti network .details of the rope system as well as description of cooperations protocols via perti networks can be found e.g. in , .as already mentioned , the primary algorithm consists of two parts , parameterization and propagation , that represent a linear sequence of activities . the parameterization part , shown in fig .[ cooparation_start ] _ ] has three phases , , whose main result consists in determining a structure , neighbourhood relations and parameterization of nodes of the constraint network . the roles are `` initializers '' of ws - order and ws - nodes correspondingly .the role is activated by the first production order , this role reads resource - objects and determines how much nodes ( ws - roles ) are required .the transition proves whether the result of is true ( action is successful ) and activates with parameter as a number of required nodes .the initializes each node according to all restrictions ( technology , propagation rules , number of machines and so on ) .if this activity is also successful ( transition ) , the third role is activated .it connects the created nodes ( return a pointer to previous node ) , composing in this way a network .this interaction plan is finished ( transition ) if there exists no next node needed to be connected .the propagation part , shown in fig .[ cooperatipo_propagation ] , _ ] consists of three blocks : local ( the phases ) , global ( the phase ) propagations and an activity ( the phase ) in the case of empty sets . the roles determine the propagation in the first and last nodes whereas does the same for all other nodes .the transition proves whether the local propagation was successful for all nodes and activates then the global propagation in .we emphasize the local propagation requires a sequential executing of roles whereas in global propagation all roles can be in parallel executed .finally , the transition proves whether the values set ( ws - positions ) of each node is empty . in the case of empty setsthe role tries to increase initial areas of values , first locally in neighbour nodes , then globally by restart of the local propagation .thus , agents , executing the roles described in the parameterization and propagation parts , `` know how to solve '' the csp problem . therefore all disturbances associated with a change of resources and constraints can be absorbed by a change of parameters in the interaction patterns . in this waythe mas has enough degrees of freedom to be adaptive to disturbances in a framework of the primary algorithm .the steps described in the previous sections allow generating the sequence of working steps that satisfies all local constraints . 
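To make the propagation concrete, a minimal single-workpiece sketch is given below; the step lengths, the machine set and the availability windows are illustrative assumptions and not the entries of table [assignment2]. Each node receives the finish position of its predecessor, searches the machines able to perform its operation for the earliest admissible start, and signals a failure (empty value set) if no admissible position exists.

```python
# Minimal sketch of the local constraint propagation between working-step
# nodes (single chain; all lengths and machine slots are hypothetical).

from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class WorkingStep:
    name: str           # e.g. "WS_A_1"
    lengths: dict       # machine id -> processing length (0 = operation not possible)


@dataclass
class Machine:
    free_from: float = 0.0   # earliest available time slot on this machine


def propagate(steps: List[WorkingStep], machines: dict) -> Optional[List[Tuple[str, str, float, float]]]:
    """Propagate start/finish positions along a chain of working steps.

    Each node receives the finish position of the previous node and looks for
    the earliest admissible start on a machine that can perform it.  Returns
    the plan as (step, machine, start, finish) tuples, or None if a value set
    runs empty.
    """
    plan = []
    prev_finish = 0.0                       # "finish"-position handed over by the previous node
    for ws in steps:
        candidates = []
        for m_id, length in ws.lengths.items():
            if length == 0:                 # zero length: machine cannot perform the operation
                continue
            start = max(prev_finish, machines[m_id].free_from)
            candidates.append((start + length, start, m_id))
        if not candidates:                  # empty set -> request another position upstream
            return None
        finish, start, m_id = min(candidates)   # locally earliest finish
        machines[m_id].free_from = finish
        plan.append((ws.name, m_id, start, finish))
        prev_finish = finish                # propagate the finish position to the next node
    return plan


if __name__ == "__main__":
    machines = {"M1": Machine(), "M2": Machine()}
    steps = [
        WorkingStep("WS_A_1", {"M1": 3.0, "M2": 0.0}),
        WorkingStep("WS_A_2", {"M1": 2.0, "M2": 2.5}),
        WorkingStep("WS_A_7", {"M1": 0.0, "M2": 4.0}),
    ]
    for row in propagate(steps, machines):
        print(row)
```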
howeversuch global characteristics of a plan as cost , manufacturing time and so on are not considered .therefore , as pointed out by some authors , the next step consists in optimizing the obtained sequences . generally speaking, constraint satisfaction and optimization can not be separated into two different steps , rather , it represents a sui generis combination . before to start a discussion of agent - based optimization it needed to mention two features of such an approach .the first feature of agent - based optimization consists in a local character of used data .combinatorial or heuristic approaches assume the data , required to be optimized , are globally available .optimization in this case looks like a `` chess play '' where all pieces are visible and it needs to find some combination of pieces positions . in the multi - agent variation there is no this central viewpoint , each agent makes only a local decision about to occupy a positions or not ( see fig .[ beleg_plan_decisions ] ) . in this waythe agent performs optimization of local decisions instead of global positions of the working steps .moreover , from agents viewpoint , any of their decisions has not foreseeable perspectives for a global optimum .the second feature of agent - based optimization is caused by the local nature of optimization problem .each agent during the csp phase tries to occupy a position immediately after the previous working step .this strategy is motivated from the manufacturing side in trying to avoid a waiting time at processing elements ( machines ) .evidently , this strategy can not guarantee a global optimum .therefore the agent has to compute what will happen if the next processing step will begin not immediately after the previous step .it can be achieved by shifting a manufacturing of a workpiece on some steps that increase a local cost of a plan ( e.g. intermediate storage ) but reduce global costs ( see figs .[ eigen_plans0 ] , [ jumps ] ) .this approach ( forecasting ) is similar to a decision tree in distributed form .as known , an increasing of the depth of tree rapidly increases the search space . after discussing the features of agents - based optimization, one can focus on the problem of assignment plan .there are two important steps , that the optimization needs to be performed on .firstly , an order of the working steps in the group 2 and 4 ( see fig . [ constraint ] ) .forasmuch as there are only 2881 combination between - , this optimization step can be performed by exhaustive search .the second point of optimization concerns the local decisions ( concerning machine and position ) made by agents .however search space ( taking into account the forecasting effect ) grows in this case exponentially and e.g. even for 22 agent ( 2 production workpieces , forecast for next 5 positions ) comes to . therefore exhaustive search methods like constraints optimization are inefficient even on very fast computer .the search space can be essentially reduced if to take into account the following observation .the assignment planning for different workpieces represents an iterative process where all iterations are very similar to one another . 
in this way the whole assignment plan represents a periodic pattern , that can be observed in fig .[ eigen_plans0 ] .here there are two main patterns shown by black and white colors ( order of the working steps as well as their positions on machines ) that however differ in the last workpieces .it means that in case the optimal ( or near optimal ) scheme for the first iteration is found , next iteration can use the same scheme .the distributed approach being able to treat this kind of pattern - like problem is known as the ant colony optimization algorithm ( aco ) .this method originated from observation of ants in the colony .going from the nest to the food source , every ant deposits a chemical substance , called pheromone , on the ground . in the decision point( intersection between branches of a path ) ants make a local decision based on the amount of pheromone , where a higher concentration means a shorter path .the result of this strategy represents a pattern of routes where thick line points to a shorter path .similar strategy can be applied to local decisions of agents , participating in the plan making .agents after the csp approach choose several assignment plans from the generated set of them and form an optimization pool .these assignment plans can represent also only segments of plans ( these connected working steps represent independent parts of assignment plan ) that satisfy all formulated constraints .these segments / plans can be combined into a common plan so that to satisfy the postulated optimization criterion .thus , the more optimal segments are included into this pool , the more optimal common plan will be obtained .the aco algorithm marks ( like a pheromone rate ) the optimal segments obtained on the previous step . the fragments with the highest pheromone rate are included into the top of pool .in this way agents consider first the aco - obtained sequence and try to modify it ( e.g. using forecasting effect ) .thus , an optimization pool has always solutions with a high pheromone rate , from them the most optimal one will be then chosen .the optimality of a plan is also influenced by a number of transportations of a workpiece from one machine to another .these transportations are represented by so - called `` jumps '' in the plan making , as shown in fig .[ jumps ] .the minimal number of jumps for a workpiece is defined by technological requirements and e.g. for a plan shown in fig .[ eigen_plans0 ] is equal to 2 .however the number of jumps can be increased that worsens a cost but improves other characteristics of an assignment plan .this mechanism is utilized in combined optimization criteria , e.g. the minimal cost at defined length ( constant delivery date ) .dependence between the number of jumps and , for example , the length of generated plans is shown in fig .[ optim_plot ] .the presented approach enables to react reasonably to disturbances in manufacturing by using the constraint - based approach in a multi - agent way .it does not require any centralized elements , that essentially increases a reliability of common system .
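A minimal sketch of this pheromone-weighted pool is given below; the segment pools, their costs, the evaporation rate and the deposit rule are illustrative assumptions rather than values from the described system. In each iteration every workpiece draws one of its admissible plan segments with probability proportional to the accumulated pheromone, the combined plan is evaluated against the optimization criterion, and the segments of the better plans are reinforced.

```python
# Pheromone-weighted selection of plan segments (ant-colony style sketch);
# all pools, costs and rates are illustrative assumptions.

import random

# For each workpiece, a pool of admissible plan segments with a local cost.
# In the real system these segments come out of the CSP phase.
segment_pools = {
    "A": [("A-seg1", 10.0), ("A-seg2", 12.0), ("A-seg3", 9.5)],
    "B": [("B-seg1", 7.0), ("B-seg2", 6.5)],
}

pheromone = {name: 1.0 for pool in segment_pools.values() for name, _ in pool}
EVAPORATION = 0.1   # fraction of pheromone that evaporates per iteration
DEPOSIT = 5.0       # deposited pheromone, scaled by the quality of the plan


def pick(pool):
    """Choose a segment with probability proportional to its pheromone rate."""
    weights = [pheromone[name] for name, _ in pool]
    return random.choices(pool, weights=weights, k=1)[0]


best_plan, best_cost = None, float("inf")
for _ in range(200):
    plan = [pick(pool) for pool in segment_pools.values()]
    cost = sum(c for _, c in plan)            # combined optimization criterion
    for name in pheromone:                    # evaporation
        pheromone[name] *= (1.0 - EVAPORATION)
    for name, _ in plan:                      # deposit on the segments of this plan
        pheromone[name] += DEPOSIT / cost
    if cost < best_cost:
        best_plan, best_cost = plan, cost

print(best_cost, [name for name, _ in best_plan])
```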
Nowadays, the globalization of national markets requires the development of flexible, demand-driven production systems. Agent-based technology, being distributed, flexible and autonomous, is expected to provide a short reaction time to disturbances and sudden changes of the environment, and thus allows these requirements to be satisfied. The distributed constraint-satisfaction approach underlying the suggested method is described by a modified Petri network, providing both the conceptual notions and the main details of the implementation.
pursuing new ideas is a fundamental characteristic of our modern society , where brand - new goods are always ready to push their predecessors off the market .innovation is one of the most important keywords to understand our society in this sense , as earlier societies were shaped by traditional ideas to be conserved in an unaltered form as much as possible .for this reason , there have been extensive empirical economic and business studies on how innovations get started , diffused and approved in a society , and it is becoming an attractive topic in statistical physics as well . in a classical work about diffusion of innovations , rogers claimed that there is a common pattern in innovation dynamics , that people adopting an innovation are normally distributed in time . as a result, the cumulative number of adopters is expected to show an -shaped pattern over time , which is described by the error function : it grows slowly at first , expands rapidly at some point , and then slowly saturates to 100% .deviation from the mean adoption time , , over the entire population defines five adopter categories such as innovators ( , 2.5% ) , early adopters ( , 13.5% ) , the early majority ( , 34% ) , the late majority ( , 34% ) , and laggards ( , 16% ) , where is the standard deviation of adoption time .if normality was true , it might reflect variations in individual innovativeness , which is possibly an aggregate of numerous random events and is normally distributed over the population . however , this is a purely static picture of a non - communicating population and it is an implausible description of an innovative society . at the same time , rogers suggested a dynamic origin of this -shaped pattern by comparing it to an epidemic process .a relevant description is then more likely to be a logistic function ( see , e.g. , refs . ) than the error function .a logistic function is basically written as , which grows from zero to one as time goes from to . here, the assumption is that there is a _ single _ innovation like a disease , diffusing into a passive population .however , the problem with this approach is that ideas are evolving during the course of adoption , and innovation researchers are already well aware that people actively modify an adopted idea whenever it is possible and necessary , which is termed re - invention as a consequence , it is the rule rather than the exception that every modified innovation may well compete with all its predecessors , so the picture becomes more colorful than the dichotomy of a new idea versus an old one . in short, this epidemic description does not capture the genuine dynamic feature of innovations , and even more refined mathematical approaches such as the bass model do not overcome such limitations .this issue is also deeply related to the pro - innovation bias of diffusion research , which means that one tends to overlook such an innovation that dies out by rejection or replaced by a better one .although there have been statistical - physical approaches to introduce many competing ideas into the dynamics of innovation , they are rather focused on scaling behavior under specific stochastic rules than comparing the findings with empirical observations . 
to sum up , analytic concepts are lacking to explain actual patterns of innovation diffusion as a fully dynamic process with a multitude of ideas competing simultaneously .for this reason , we consider simple ideal competition among ideas whose values are so well - defined that everyone can adopt a better idea as soon as she encounters it , without any barriers against the diffusion of innovations .even if this picture is unrealistic , it is theoretically intriguing , and can serve as a reference point to start with when assessing innovations in practice . in particular , our results suggest that the interplay of adoption and exploration must be considered to achieve a plausible minimalist description , which leads to neither normal nor logistic but slightly skewed behavior as a signature of an ideal innovative society .this simple explanation is in contrast to many variants of the logistic growth model that describe asymmetry in empirical -shaped patterns .moreover , the analysis tells us that the speed of progress in ideas is coupled to how broadly ideas are distributed in the society : a fast innovating society tends to be accompanied by a broad spectrum of ideas , some of which can be far from state - of - the - art . it should be kept in mind that the term ` ideal ' is absolutely unrelated to any judgments of value concerning the phenomena that we are investigating but only means that we are considering a conceptual construct that can be pursued analytically .following ref . , we assume that every idea is assigned a scalar value representing its quality .this automatically implies that this quantity is transitive without any cyclic dominance among ideas , and the strict dominance relationship between any pair of distinct ideas prevents people from revisiting old ideas . a difference from ref . is that can take any _real _ value , not only an integer .let denote the fraction of the population choosing ideas between and at time .we then call a probability density function ( pdf ) of idea .our population dynamics approach on the mean - field level suggests that the relative growth rate is proportional to the fraction of those with as they are potential adopters of .this fraction is , by definition , the cumulative distribution function ( cdf ) and we thus have p(x , t ) , \label{eq : dp}\ ] ] where is a positive proportionality constant representing the rate of adoption , which can be set as unity by using a rescaled time , and is the average of over the population . note that the total probability is always conserved because = 0 ] .the speed of this wave is = g^{-1}(t/4)/t ] , for example , always evolves to a delta function at .this deficiency makes it difficult to gain insight on the innovation dynamics from the current formulation , revealing its incompleteness .the reason is that our current formulation does not include any generative mechanism for innovations .therefore , we add another term to the adoption dynamics considered so far . it could be argued that individual exploration for different ideas can be modeled more or less by a brownian random walk along the -axis : where is a measure of exploratory efforts . because it yields a normal distribution with variance ,this could be interpreted as invoking the classical idea of normality in the diffusion of innovations , but this normality enters as a consequence of the dynamic exploration process rather than a static trait. 
it also expresses a conservative viewpoint that an individual alone achieves only small modifications that may even degenerate equally .this is obviously a huge simplification about the human mind , but we shall be content with such a minimalist description at the moment . adding this exploratory mechanism to the adoption , the resulting equation is written as by rescaling and , we set both parameters and as unity . notably , eq . ( [ eq : diff ] )does not have a stationary solution for the following reason : when , the solution for eq .( [ eq : diff ] ) is given as weierstrass elliptic function , which is even and does not satisfy the boundary condition of at .this might look counter - intuitive at first glance as the pdf tends to converge to a single point due to adoption , which could be balanced by exploration .however , a more correct picture is that the pdf converges to a _ higher _ position than the center , so it gradually moves upward via exploration instead of staying at a fixed position .this notion turns out to be plausible as will be explained shortly below .if we consider the boundary condition , the actual equation to solve here is given as which can be shown identical to fisher s equation by simply changing the variables .fisher s equation was originally devised to describe the frequency of a single mutant gene in a one - dimensional population rather than a cdf , and it is interesting that the same equation arises in the context of an infinite series of mutants in an infinite - dimensional ( i.e. , mean - field ) population .this equation has been extensively studied in biology and physics as one of the simplest reaction - diffusion systems .we only mention the basics of the known results about fisher s equation and those who are interested in comprehensive discussions may refer to ref . and references therein . equation ( [ eq : fisher ] ) admits traveling wave solutions , and preserves the shapes during propagation .the traveling wave solutions are stable against small perturbations within a finite domain , moving with the waves .each speed builds up a unique wave shape , and speed is determined by the tail of the initial cdf in the following manner : if with as at initial time , the speed of the wavefront asymptotically converges to when , and when . in short , a longer tail leads to a faster propagating wave .even if an initial pdf has bounded support , i.e. , only for , a traveling wave solution will develop with instead of a delta function .the information on the initial condition other than the tail exponent becomes irrelevant in the asymptotic limit due to the random - walk process .there is no traveling wave solution below , which is consistent with the impossibility of a stationary solution as stated above .another important feature is that the characteristic width of the wavefront is proportional to because and compete to determine width .in contrast , speed is expressed as as both the mechanisms of exploration and adoption make positive contributions . as a consequence , the characteristic time for a wavefront to pass through a particular point is not sensitive to because .a fully analytic expression for a specific velocity is available as : \right.\nonumber\\ & & \left .-\tanh^2 \left [ \frac{x}{4\sqrt{3d / k } } - \frac{5k}{24}(t - t_0)\right ] \right\ } , \label{eq : special}\end{aligned}\ ] ] where is a reference point in time . 
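A short finite-difference sketch of Fisher's equation in its standard form (a field u equal to one far behind the front and zero far ahead of it, with D = k = 1; grid spacing, time step and total time are illustrative choices) shows how a compactly supported initial condition relaxes onto a traveling front whose speed approaches 2*sqrt(Dk); the closed-form profile quoted above is one member of this family of solutions.

```python
# Explicit finite-difference sketch of Fisher's equation
#   du/dt = D d^2u/dx^2 + k u (1 - u),
# with u = 1 far behind the front and u = 0 far ahead of it
# (D = k = 1; grid and time step are illustrative).

import numpy as np

D, k = 1.0, 1.0
L, nx = 200.0, 2000
dx = L / nx
dt = 0.2 * dx**2 / D                 # stability of the explicit scheme
t_total = 40.0
x = np.linspace(-L / 2, L / 2, nx)

u = (x < 0).astype(float)            # compactly supported initial condition


def front_position(u, x, level=0.5):
    """Point where the front crosses the given level."""
    return x[np.argmin(np.abs(u - level))]


times, positions = [], []
t = 0.0
for step in range(int(t_total / dt)):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    u += dt * (D * lap + k * u * (1.0 - u))
    u[0], u[-1] = 1.0, 0.0           # fixed values far behind / far ahead of the front
    t += dt
    if step % 500 == 0:
        times.append(t)
        positions.append(front_position(u, x))

half = len(times) // 2
speed = (positions[-1] - positions[half]) / (times[-1] - times[half])
print("measured front speed:", round(speed, 3),
      " asymptotic value 2*sqrt(D*k) =", 2.0 * np.sqrt(D * k))
```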
asthis expression is handy to maintain qualitative features unaltered , we will focus on this solution to observe differences from the normal or logistic descriptions .the numbers presented here should be taken as indicating qualitative features of the solution , and not as universal values for arbitrary .the shape of the wave is obtained by differentiating eq .( [ eq : special ] ) with respect to , which is shown in fig .[ fig : diff](a ) at . as is clearly shown there, this pdf is not symmetric but skewed negatively , i.e. , with a longer tail on the left side .the skewness is quantified from the second and third moments as .due to this skewness , while the mean is , the maximum is located at .consequently , the most commonly observed idea tends to lead us to overestimate the population mean .recall the five categories defined with respect to the _ mean _ adoption time , which is given by our as \big/ \left [ \int_{-\infty}^\infty p(x=0;t ) dt \right ] = \frac{12}{5} ] [ see eq .( [ eq : special ] ) ] to the data points where is the saturation number at .the fitting parameters are .( b ) the same data shows larger deviations when fitted with the logistic function \right\}/2 ] ( blue ) .their best fitting parameters are and , respectively .( c ) broadband penetration rates in greece and the united kingdom ( uk ) from eurostat .the curves were obtained in the same way as above with eq .( [ eq : special ] ) , yielding for greece and for the uk.,scaledwidth=45.0% ].fitting results of eq .( [ eq : special ] ) to the broadband penetration rates from 2002 to 2010 in eu member countries . [ cols="^,^,^,^,^,^,^,^ " , ]
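The fitting procedure itself can be sketched as follows; the data points below are synthetic placeholders rather than the Eurostat series quoted in the table, and the "skewed" candidate is simply a squared-logistic form standing in as one asymmetric alternative to the symmetric logistic, not the exact traveling-wave profile.

```python
# Sketch of fitting a symmetric logistic and an asymmetric S-curve to an
# adoption time series.  The data are synthetic placeholders, not the
# Eurostat broadband penetration rates.

import numpy as np
from scipy.optimize import curve_fit

t = np.arange(2002, 2011, dtype=float)
# synthetic penetration rates (percent): slow early growth, then saturation
y = np.array([0.5, 1.5, 4.0, 9.0, 16.0, 22.0, 26.0, 28.0, 29.0])


def logistic(t, n_inf, k, t0):
    """Symmetric logistic growth."""
    return n_inf / (1.0 + np.exp(-k * (t - t0)))


def skewed(t, n_inf, k, t0):
    """Asymmetric S-curve (squared-logistic form), one simple skewed alternative."""
    return n_inf / (1.0 + np.exp(-k * (t - t0)))**2


for name, f in [("logistic", logistic), ("skewed", skewed)]:
    popt, _ = curve_fit(f, t, y, p0=[30.0, 1.0, 2006.0], maxfev=10000)
    resid = y - f(t, *popt)
    print(name, "parameters:", np.round(popt, 3),
          "sum of squared residuals:", round(float(resid @ resid), 3))
```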
Based on a recent model of paradigm shifts by Bornholdt et al., we studied mean-field opinion dynamics in an infinite population where an infinite number of ideas compete simultaneously, with their values publicly known. We found that a highly innovative society is not characterized by a heavy concentration in highly valued ideas: rather, provided that the rate of adoption is constant, ideas are more broadly distributed in a more innovative society with faster progress, which suggests a positive correlation between innovation and technological disparity. Furthermore, the distribution is generally skewed in such a way that the fraction of innovators is substantially smaller than has been believed in conventional innovation-diffusion theory based on normality. Thus, in the ideal situation the typical adoption pattern is predicted to be asymmetric with slow saturation, a prediction that we compared with empirical data sets.
since the mid - twentieth century , feedback control has played crucial roles in science and engineering .here , `` feedback '' means that a control protocol depends on measurement outcomes obtained from the controlled system .recently , feedback control has become increasingly important in terms of nonequilibrium physics , due to at least the following two reasons .first of all , stochastic aspects of thermodynamics have become important due to recent theoretical and experimental developments .theoretically , a number of nonequilibrium equalities such as the fluctuation theorem and the jarzynski equality have recently been found . on the other hand, experimental techniques have been developed to manipulate and observe small thermodynamic systems such as macromolecules and colloidal particles , and several nonequilibrium equalities have been experimentally verified . moreover ,artificial and biological molecular machines have been investigated . in these contexts ,feedback control is useful to realize intended dynamical properties of small thermodynamic systems , and it has become a topic of active research . secondly , feedback control sheds light on the foundations of thermodynamics and statistical mechanics concerning `` maxwell s demon '' .in fact , maxwell s demon performs measurement and feedback control on thermodynamic systems .recently , maxwell s demon has attracted renewed interest from the standpoints of modern information theory and statistical mechanics .a quintessential model of maxwell s demon is a single - particle heat engine proposed by l. szilard in 1929 . during the thermodynamic cycle of the szilard engine, the demon obtains bit ( nat ) of information by a measurement , performs feedback control , and extracts of positive work from a single heat bath . after numerous arguments on the consistency between the demon and the second law of thermodynamics, it is now understood that the work needed for the demon ( or equivalently the feedback controller ) during the measurement and information erasure compensates for the work that can be extracted by the demon .therefore , we can not extract a net positive work from the total system of the engine and the demon in an isothermal cycle , and therefore the presence of the demon does not contradict the second law of thermodynamics .nevertheless , of work extracted by the demon can be still useful . by using feedback control , we can increase the system s free energy without injecting any energy ( work ) to it .we stress that , without feedback control , we need the direct energy input into the system in order to increase its free energy due to the second law of thermodynamics .feedback control may be regarded as a powerful tool to control thermodynamic systems . sincethe crucial quantity is the information that is obtained to be used for feedback control , we may regard the szilard - type heat engine as `` information heat engine . ''recently , such an information heat engine was realized experimentally by using a colloidal particle . in this paper, we formulate a general theory of feedback control on stochastic thermodynamic systems .in particular , we extend recent theoretical results on the generalizations of the fluctuation theorem and the jarzynski equality to the situations in which the measurement and feedback control are non - markovian and there are multi - heat baths .our results serve as the fundamental building blocks of information heat engines .this paper is organized as follows . 
in sec .ii , we briefly review the framework of stochastic thermodynamics in a general setup .we discuss classical stochastic systems that are in general non - markovian and in contact with multi - heat baths .we discuss the concept of entropy production and the detailed fluctuation theorem as our starting point . because they are general properties of nonequilibrium systems , our formulations and results in the following sections are not restricted to langevin systems but applicable to any classical stochastic systems that satisfy the detailed fluctuation theorem . in sec .iii , we formulate measurements on thermodynamic systems .we discuss multi - measurements including continuous measurements , and investigate the properties of the mutual information obtained by the measurements . in particular , we introduce the two kinds of mutual information and , which will be shown to play key roles in the discussion of feedback control . in sec .iv , we discuss feedback control on markov and non - markov processes , and investigate feedback control in terms of probability theory , where the causality of the measurement and feedback play a crucial role . in sec .v , we derive the main results of this paper .we generalize the nonequilibrium equalities to situations in which the system is subject to feedback control . in particular , we derive two types of generalizations of the fluctuation theorem and the jarzynski equality .one involves a term concerning the mutual information , and the other involves a term of feedback efficacy . as corollaries ,we derive the generalizations of the second law of thermodynamics and a fluctuation - dissipation relation . in sec .vi , we illustrate our general results by two examples : a generalized szilard engine with measurement errors and a feedback - controlled ratchet . we discuss the former analytically and the latter numerically . in sec .vii , we conclude this paper . in appendixa , we discuss the physical meaning of entropy production to elucidate the physical contents of our results in two typical situations .in this section , we briefly review thermodynamics of classical stochastic systems and introduce notations that will be used later .we consider a classical stochastic system that is in contact with heat baths , , , at respective temperatures , , , . let be the phase - space point of system and be a set of external parameters such as the volume of a gas or the frequency of an optical tweezers .we control the system from time to with control protocol .let be a trajectory of the system . to formulate the stochastic dynamics , we discretize the time interval ] corresponds to .control protocol can also be discretized .let be the value of between and , where it is assumed to be constant during this time interval ( see fig . 1 ) .we denote the trajectory of from time to as .let be the value of parameter before time , which is not necessarily equal to because we can switch the value of the parameter at time .we also denote the value of after time as , which is not necessarily equal to , either ( see also fig . 1 ) .let ] is the initial distribution of .the initial distribution can be chosen as a stationary distribution under external parameters , as ] .we note that ] , which depends on the external parameters at time ( i.e. 
, ) .we note that ] can be replaced by ] just as ] the initial distribution of the backward processes .we stress that ] as ] be the heat that is absorbed by the system from the heat bath satisfying = -q_i[x^\dagger_n , \lambda^\dagger_n] ] simply as ] the probability of finding the entropy production in the forward processes , satisfying = \int \delta ( \sigma - \sigma [ x_n ] ) p [ x_n ] dx_n,\ ] ] where is the delta function . on the other hand , let ] , we obtain crooks fluctuation theorem }{p[\sigma ] } = e^{-\sigma}. \label{fluctuation_crooks}\ ] ] the detailed fluctuation theorem ( [ fluctuation2 ] ) or crooks fluctuation theorem ( [ fluctuation_crooks ] ) leads to the integral fluctuation theorem where the ensemble average is taken over all trajectories under forward protocol ( see eq .( [ ensemble_average ] ) ) . from the concavity of the exponential function, we obtain which is an expression of the second law of thermodynamics : the ensemble - averaged entropy production is non - negative . by taking the ensemble average of the logarithm of both sides of eq .( [ fluctuation2 ] ) , we have \ln \frac{p[x_n]}{p^\dagger[x_n^\dagger ] } , \label{relative1}\ ] ] which we will refer to as the kawai - parrondo - broeck ( kpb ) equality . the right - hand side of eq .( [ relative1 ] ) is the kullback - leibler divergence ( or the relative entropy ) of ] , which is always positive .therefore , eq. ( [ relative1 ] ) reproduces inequality ( [ second1 ] ) . if the probability distribution of is gaussian , the cumulant expansion of eq .( [ ensemble_average ] ) leads to a variant of fluctuation - dissipation relation which indicates that is determined by the fluctuation of .equality ( [ fdt1 ] ) is an expression of the fluctuation - dissipation theorem of the first kind , which gives a special case of the green - kubo formula . in the case of an isothermal process with a single heat bath, the entropy production reduces to = \beta ( w[x_n ] - \delta f),\ ] ] where ] .we perform a measurement on it and obtain outcome which is also a probability variable .the error of the measurement can be characterized by a conditional probability ] for all , where we note that the sum should be replaced by the integral if is a continuous variable .if the measurement is error - free , ] is independent of the probability distribution ] , and the probability of obtaining by = \sum_x p[x , y] ] , is given by the bayes theorem : = \frac{p[y|x]p[x]}{p[y]}. \label{bayes}\ ] ] we next discuss the information contents related to the measurement .the shannon information contents of the probability variables are given by \ln p[x ] , \h_y : = - \sum_y p[y ] \ln p[y],\ ] ] which characterize the randomnesses of and , respectively . on the other hand ,the mutual information content between and is given by [x : y],\ ] ] where : = \ln \frac{p[y|x]}{p[y]}.\ ] ] in this paper , we also call ] holds due to the bayes theorem ( [ bayes ] ) . the mutual information measures the amount of information obtained by the measurement .it is known that if the measurement is error - free , holds .we next formulate multiple measurements on nonequilibrium dynamics , and discuss the properties of the mutual information obtained by the measurements .let be the outcome at time .in this section , we assume the followings : 1 .the error of the measurement at time is characterizes by ] .this assumption is also justified in many real experimental situations .2 . 
the unconditional probability distribution of , ] , we call the measurement markovian , which means that the outcome is determined only by the system s state immediately before the measurement . this condition is satisfied if the measurements can be performed in a time interval that is sufficiently shorter than the shortest time scale of the system .we note that the markovness of the measurement is independent of that of the dynamics .we assume that the measurements are performed at times , , , , where .if , , , , hold , the measurement is time - continuous in the limit of , because the measurements are performed at all times .we write as the set of measurement outcomes that are obtained up to time , i.e. }) ] is the maximum satisfying . if the measurement is continuous , then .we define : = \prod_{k=1}^{m ' } p[y_{n_k } | x_{n_k } ] , \label{c}\ ] ] where is the maximum integer satisfying . without feedback , eq . ( [ c ] ) defines the conditional probability of obtaining outcomes under the condition of , while , with feedback , this interpretation of eq .( [ c ] ) is not necessarily correct as shown in the next section . to explicitly demonstrate this point and to distinguish ] reduces to ] equals the mutual information between trajectories and defined as : = \ln ( p[y_n | x_n ] / p[y_n]) ] .we then define : = \prod_{k=0}^{n-1 } p[y_k | x_k],\ ] ] which is to be compared with eq .( [ c ] ) .we then obtain the joint probability distribution of and with feedback control as & = \prod _ { k=0}^{n-1 } p[y_{k+1 } | x_{k+1 } ] p[x_{k+1 } | x_{k } , \lambda_{k } ( y_{k-1 } ) ] \\ & = p_{\rm c}[y_n | x_n ] p [ x_n | \lambda_n ( y_{n-1 } ) ] .\end{split } \label{joint_prob}\ ] ] we can check that dx_n dy_n = 1,\ ] ] by integrating and in eq . ( [ joint_prob ] ) in the order of , where the causality of measurements and feedback play crucial roles .the marginal distributions are given by = \int p[x_n , y_n ] dy_n , \p[y_n ] = \int p[x_n , y_n ] dx_n,\ ] ] and the conditional distributions by = \frac{p[x_n , y_n]}{p[y_n ] } , \p[y_n | x_n ] = \frac{p[x_n , y_n]}{p[x_n]}.\ ] ] we stress that , in the presence of feedback control , \neq p_{\rm c}[y_n | x_n]\ ] ] in general , because protocol depends on . on the other hand , without feedback control , = p_{\rm c}[y_n | x_n] ] is simply given by ] is given by p[x_n , y_n ] dx_n dy_n,\ ] ] and the conditional average under the condition of is given by p[x_n | y_n ] dx_n.\ ] ] equation ( [ prob1 ] ) still holds in the presence of feedback control : & : = \frac{p[x_n , y_n]}{p[x_n , y_{n-1 } ] } \\ & = \frac{p_{\rm c}[y_n | x_n ] p[x_n | \lambda_n ( y_{n-1})]}{p_{\rm c } [ y_{n-1 } | x_{n-1 } ] p[x_n | \lambda_n ( y_{n-1 } ) ] } \\ & = p[y_n | x_n ] .\end{split}\ ] ] we note that eq .( [ prob0 ] ) also holds with feedback control .we then define the mutual information in the same way as in the case without feedback control : & : = \sum_{k=1}^{m ' } i[y_{n_k } : x_{n_k } | y_{n_k-1 } ] \\ & = \ln \frac{p_{\rm c}[y_n | x_n]}{p[y_n]}. \end{split } \label{mutual_f}\ ] ] in the presence of feedback control , ] , because \neq p[y_n | x_n] ] under the condition of initial .we next perform backward experiments with protocol , where was chosen in the forward experiments .we stress that we do not perform any feedback in the backward experiments : is just the time - reversal of .we then obtain ] which in general depends on the measurement outcomes in the forward experiments. 
a natural choice of ] .then we have }{p[x_n | \lambda_n ( y_n ) ] } = \exp \left ( - \sigma [ x_n , \lambda_n ( y_n ) ] \right),\ ] ] where : = & - \ln p^\dagger_0 [ x^\dagger_0 | y_n ] + \ln p_0 [ x_0 ] \\ & - \sum_i \beta_i q_i [ x_n , \lambda_n ( y_{n-1 } ) ] .\end{split } \label{entropy_f1}\ ] ] if there is a single heat bath and the initial distributions of the forward and backward experiments are given by the canonical distributions , then the entropy production reduces to = \beta ( w[x_n , \lambda_n ( y_n ) ] - \delta f [ y_n]),\ ] ] where the free - energy difference can depend on the measurement outcomes as : = f(\lambda_{\rm fin } ( y_n ) ) - f(\lambda_{\rm int}) ] & entropy production .+ & measurement outcome at time .+ & measurement outcomes from time to . + & time - reversal of , i.e. with .+ & protocol of feedback control with outcomes .+ ] & probability density of obtaining under the condition of , which characterizes the measurement error .+ ] .+ ] .+ ] & sum of the conditional mutual information : : = \prod_k i[y_{n_k } : x_{n_k } | y_{n_k-1}] ] and ] and ] needs to be satisfied for all . to explicitly see this , we write = : \varepsilon > 0 ] never occur . therefore ,if = 0 ] for some , and also obtain eqs .( [ relative_f1 ] ) , ( [ fdt_f1 ] ) , and inequalities ( [ second_f1 ] ) , ( [ second_f2 ] ) , ( [ second_f3 ] ) .we next derive a different type of nonequilibrium equality . in this subsection , we assume that the measurements are markovian ( i.e. , = p[y_n | x_n] ] is the `` renormalized '' ( or `` coarse - grained '' ) entropy production defined as & : = -\ln \langle e^{-\sigma } \rangle_{y_n } \\ & = -\ln \int dx_ne^{-\sigma [ x_n , \lambda_n(y_{n-1 } ) ] } p[x_n | y_n ] . \end{split } \label{entropy_f2}\ ] ] equality ( [ fluctuation_f3 ] ) implies that the detailed fluctuation theorem retains its form under the coarse - graining , if we introduce the appropriate coarse - grained entropy production . from the concavity of the exponential function , we obtain \leq \langle \sigma \rangle_{y_n} ] and the detailed fluctuation theorem ( [ fluctuation_f1 ] ), we have } & = \int dx_n \frac{p[x_n^\dagger | \lambda_n ( y_{n-1})^\dagger]}{p[x_n | \lambda_n ( y_{n-1 } ) ] } p[x_n | y_n ] \\ & = \int dx_n \frac{p[x_n^\dagger | \lambda_n ( y_{n-1})^\dagger]}{p[x_n | \lambda_n ( y_{n-1 } ) ] } \frac{p[x_n , y_n]}{p[y_n ] } \\ & = \frac{1}{p[y_n ] } \int dx_n p[x_n^\dagger | \lambda_n ( y_{n-1})^\dagger ] p_{\rm c}[y_n | x_n ] \\ & = \frac{1}{p[y_n ] } \int dx_n p[x_n^\dagger | \lambda_n ( y_{n-1})^\dagger ] p_{\rm c}[y_n^\dagger | x_n^\dagger ] .\end{split}\ ] ] in the last line , we used the time - reversal symmetry ( [ time_symmetry ] ) of the measurements . by noting eq .( [ prob2 ] ) , we obtain ( [ fluctuation_f3 ] ) .we note that eq .( [ fluctuation_f3 ] ) holds regardless of the presence of feedback control . without feedback control , eq .( [ fluctuation_f3 ] ) reduces to }{p[y_n ] } = e^{- \sigma ' [ y_n ] } .\label{fluctuation_f4}\ ] ] by taking the ensemble average of both sides of eq .( [ fluctuation_f3 ] ) and noting that holds , we obtain the second generalization of the integral fluctuation theorem where is the efficacy parameter of feedback control defined as dy_n^\dagger , \label{gamma}\ ] ] which is the sum of probabilities of obtaining the time - reversed outcomes by the time - reversed measurements during the time - reversed protocols ( see fig . 
3 ) .if holds , eq .( [ integral_f2 ] ) leads to the second generalization of the jarzynski equality : if the feedback control in the forward processes is `` perfect , '' the particle is expected to return to its initial state with unit probability in the backward processes . in such a case, takes the maximum value that equals the number of possible outcomes of .in fact , for the case of the szilard engine , holds corresponding to and .in contrast , without feedback control , reduces to as dy_n^\dagger = 1,\ ] ] which vindicates the original integral fluctuation theorem .therefore , the measurements in the backward processes are used to characterized to the efficacy of feedback control in the forward processes . with forward protocol .( b ) backward outcomes with backward protocol ( ).,width=302 ] we stress that and can be measured independently , because is obtained from the forward experiments with feedback and is obtained from the backward experiments without feedback .therefore , eqs .( [ integral_f2 ] ) and ( [ jarzynski_f2 ] ) can be directly verified in experiments .in fact , eq . ( [ jarzynski_f2 ] ) has been verified in a real experiment by using a feedback - controlled ratchet with a brownian particle . from eq .( [ fluctuation_f2 ] ) , we have the second generalization of the second law of thermodynamics the equality in inequality ( [ second_f4 ] ) is achieved if does not fluctuate .we note that , if the distribution of is gaussian , we have a generalized fluctuation - dissipation theorem while the first generalization ( [ integral_f1 ] ) only involves the term of the obtained information , the second generalization ( [ integral_f2 ] ) involves the term of feedback efficacy .to understand the relationship between the mutual information and the feedback efficacy , we introduce the notation : = -\ln \langle e^{-a } \rangle\ ] ] for any probability variable .we note that , if can be written as with being a real number and being another probability variable , then ] in eq .( [ integral_f2 ] ) , = 0 ] holds as in eq .( [ integral_f1 ] ) . equality ( [ correlation1 ] ) implies that is a measure of the correlation between and .this can be more clearly seen by the cumulant expansion of eq .( [ correlation1 ] ) if the joint distribution of and is gaussian : therefore , characterizes how efficiently we use the obtained information to decrease the entropy production by feedback control : if is large , the more we obtain , the less is .we can also derive another nonequilibrium equality which also gives us the information about the feedback efficacy . by taking logarithm of the both sides of eq .( [ fluctuation_f2 ] ) , we obtain \ln \frac{p[y_n]}{p[y_n^\dagger | \lambda_n ( y_{n-1})^\dagger ] } , \label{relative_f2}\ ] ] which is a generalization of eq .( [ relative1 ] ) .the same result under a different situation has also been obtained in ref .equality ( [ relative_f2 ] ) implies that the renormalized entropy production equals the kullback - leibler divergence - like quantity between the forward probability ] .in fact , without feedback control , the right - hand side of eq .( [ relative_f2 ] ) reduces to the kullback - leibler divergence between ] and therefore the both sides of eq .( [ relative_f2 ] ) are positive , which is consistent with the second law of thermodynamics . 
on the contrary , in the presence of feedback control , the right - hand side is no longer the kullback - leibler divergence , because ] and = p[1|0 ] = \varepsilon ] is given by when , when , when , and when .therefore we obtain which confirms eq .( [ jarzynski_f1 ] ) .we next consider the second generalization ( [ jarzynski_f2 ] ) of the jarzynski equality .corresponding to two measurement outcomes , we have two backward control protocols as follows ( see also fig . 5 ) .initial state . _the initial state of the backward control is in the thermal equilibrium .insertion of the barrier ._ corresponding to step 5 of the forward process , we insert the barrier and decide the box into two boxes , because the time - reversal of the barrier removal is the barrier insertion .corresponding to or in the forward process , we divide the box with the ratio or , respectively ._ step 3 . moving the barrier ._ we next move the barrier to the middle of the box quasi - statically and isothermally .this is the time - reversal of the feedback control in step 4 of the forward process ._ we perform the measurement to find in which box the particle is in .corresponding to the backward protocol with , we obtain the outcomes of backward measurement with probability = v_0 ( 1 - \varepsilon ) + ( 1 - v_0 ) \varepsilon ] . on the other hand ,corresponding to the backward protocol with , we obtain the outcomes of backward measurement with probability = v_1 \varepsilon + ( 1 - v_1 ) ( 1- \varepsilon) ] .removal of the barrier ._ we remove the barrier and the system returns to the initial state .this is the time - reversal of the barrier insertion in step 2 of the forward process .that denotes the measurement outcomes in the forward process , we have two control protocols in the backward process , where denotes the measurement outcomes in the backward process.,width=283 ] from step 4 of the backward process , we have + p [ y'= 1 | \lambda ( y=1)^\dagger ] \\ & = ( 1 - \varepsilon ) ( v_0 + v_1 ) + \varepsilon ( 2 - v_0 - v_1 ) . \end{split}\ ] ] on the other hand , we can straightforwardly obtain which confirms eq .( [ jarzynski_f2 ] ) .we next discuss a model for brownian motors , in particular a feedback - controlled ratchet .we consider a rotating brownian particle with a periodic boundary condition .let be the position or the angle of the particle , and its boundary condition is given by with being a constant . in the following , we restrict the particle s position to .we assume that the particle obeys the overdamped langevin equation eq .( [ langevin ] ) , and that control parameter takes two values ( or ) .corresponding to them , the ratchet potential takes the following two profiles ( fig .6 ) : where is a constant with , and is a positive constant that characterizes the height of the potential .corresponding to .,width=302 ] we start with the initial equilibrium with parameter , and control the system from time to with the following three protocols . 1 ._ trivial control ._ we do not change the parameter .2 . _ flashing ratchet ._ at times with being integers and being a constant , we switch parameter from to or from to periodically .3 . _ feedback - controlled ratchet ._ at times , we switch the parameter with the following feedback protocol .we measure the position at without error .we then set from to if and only if the outcome is in .otherwise , parameter is set to . 
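A minimal simulation sketch of these three protocols is given below; the sawtooth potential, the feedback rule (switch whenever the measured position lies to the left of the potential maximum within its period) and all numerical values are illustrative assumptions, not the parameters used for the figures.

```python
# Overdamped Langevin sketch of the trivial, flashing and feedback-switched
# ratchet.  Potential shape, feedback rule and all numbers are illustrative
# assumptions, not the values used for Figs. 7 and 8.

import numpy as np

rng = np.random.default_rng(0)

L_per = 1.0            # spatial period of the ratchet potential
a = 0.3                # position of the potential maximum within one period
V_height = 2.0         # barrier height in units of k_B T
gamma, kT = 1.0, 1.0   # friction coefficient and temperature
dt, t_total = 1e-3, 10.0
tau = 0.5              # switching interval
n_samples = 1000


def force(x, lam):
    """Piecewise-linear sawtooth force; lam = 1 shifts the potential by half a period."""
    s = (x + 0.5 * L_per * lam) % L_per
    return np.where(s < a, -V_height / a, V_height / (L_per - a))


def run(protocol):
    x = rng.uniform(0.0, L_per, n_samples)   # crude stand-in for the initial equilibrium
    lam = np.zeros(n_samples)
    switch_every = int(tau / dt)
    for step in range(int(t_total / dt)):
        if step and step % switch_every == 0:
            if protocol == "flashing":
                lam = 1.0 - lam              # periodic switching of the whole ensemble
            elif protocol == "feedback":
                # error-free measurement of x, then switching conditioned on the
                # outcome (illustrative rule: switch when x sits left of the maximum)
                lam = ((x % L_per) < a).astype(float)
        noise = rng.normal(0.0, np.sqrt(2.0 * kT * dt / gamma), n_samples)
        x = x + force(x, lam) * dt / gamma + noise
    return x.mean()


for protocol in ("trivial", "flashing", "feedback"):
    print(protocol, "mean displacement:", round(float(run(protocol)), 3))
```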
for numerical simulations , we set , , , and , with units , , and .we performed the simulations by discretizing eq .( [ langevin ] ) with for samples .we note that , to obtain the initial thermal equilibrium , we waited and checked that the system was fully thermalized in the periodic ratchet with parameter . the time evolution of the ensemble average is plotted in fig .7 ( a ) for the above three protocols .as expected , nothing happens for the first protocol , while the particle is transported to the right on average for the second and third protocols . in the case of the feedback - controlled ratchet ,the particle is transported to the right faster than the case of the flashing ratchet .figure 7 ( b ) shows the time evolution of the work that is performed on the particle .the work is induced only in the switching times .we find that , in order to transport the particle , the energy input to the particle with feedback control is smaller than that with the flashing .corresponding to the three control protocols : the trivial control , the flashing ratchet , and the feedback - controlled ratchet .( b ) numerical result of the ensemble average of the work corresponding to the flashing ratchet and the feedback - controlled ratchet.,width=302 ] figure 8 shows the left - hand side of the jarzynski equality for the flashing and feedback - controlled ratchet , and the efficacy parameter for the feedback - controlled ratchet .we note that always holds . with feedback control , increases from as the number of switchings increases , while , without feedback control , converges to for all switching times in consistent with the original jarzynski equality . on the other hand , to obtain , we numerically performed the backward experiments .the discretization of the time is , and the number of the samples is for each trajectory of .we note that the number of the trajectories of is given by with times of switchings .figure 8 shows a good coincidence between and , which confirms the validity of eq .( [ jarzynski_f2 ] ) in the feedback - controlled ratchet . ) for the feedback - controlled ratchet.,width=332 ]in this paper , we have studied the effects of measurements and feedback control on nonequilibrium thermodynamic systems . in particular , we have generalized nonequilibrium equalities to the systems that are subject to feedback control .our formulations and results are applicable to a broad class of classical nonequilibrium systems . in sec .ii , we reviewed stochastic thermodynamics , by focusing on the nonequilibrium equalities . in sec .iii , we formulated measurements on nonequilibrium systems , and defined mutual information by ( [ mutual1 ] ) for multi - measurements . in sec .iv , we formulated feedback control on nonequilibrium systems .we discussed the properties of the joint probability ( [ joint_prob ] ) , which is well - defined due to causality .we introduced the mutual information by ( [ mutual_f ] ) , which is not equivalent to in the presence of feedback control .in fact , describes the correlation between the system and the outcomes , which characterizes the effective information obtained by the measurements .we have also shown that the detailed fluctuation theorem ( [ fluctuation_f1 ] ) holds in the presence of feedback control .section v constitutes the main results of this paper .we derived two types of generalizations of the nonequilibrium equalities . 
in sec .v a , we derived a generalized detailed fluctuation theorem ( [ fluctuation_f2 ] ) which involves the mutual information .based on eq .( [ fluctuation_f2 ] ) , we derived the generalizations of the integral fluctuation theorem ( [ integral_f1 ] ) , the jarzynski equality ( [ jarzynski_f1 ] ) , the second laws ( [ second_f1 ] ) ( [ second_f2 ] ) ( [ second_f3 ] ) , the fluctuation - dissipation theorem ( [ fdt_f1 ] ) , and the kpb equality ( [ relative_f1 ] ) that all involve the mutual information . in sec .v b , we derived the renormalized detailed fluctuation theorem ( [ fluctuation_f3 ] ) , and derived the generalizations of the integral fluctuation theorem ( [ integral_f2 ] ) , the jarzynski equality ( [ jarzynski_f2 ] ) , the second law ( [ second_f4 ] ) , the fluctuation - dissipation theorem ( [ fdt_f2 ] ) , and the kpb equality ( [ relative_f2 ] ) .we have shown that mutual information , rather than , plays the crucial role to formulate the nonequilibrium equalities under feedback control .these results are the generalizations of the fundamental equalities in nonequilibrium statistical mechanics to feedback - controlled processes , and lead to the generalized second law of thermodynamics with feedback control , which gives the minimal energy cost that is needed for the feedback control . in sec .vi , we discussed simple examples to explicitly show that our results in sec .v can be applied to typical situations . in sec .vi a , we discussed the szilard engine with measurement errors that achieves the equality of the generalized second law of thermodynamics ( [ second_f2 ] ) or ( [ second_f3 ] ) .this is an important model to quantitatively illustrate that the mutual information can be converted to the work .we also confirmed the two generalized jarzynski equalities ( [ jarzynski_f1 ] ) ( [ jarzynski_f2 ] ) in the generalized szilard engine . in chap .vi b , we considered a feedback - controlled ratchet and confirmed a generalized jarzynski equality ( [ jarzynski_f2 ] ) .all of our formulations and results are consistent with the original nonequilibrium equalities and the second law of thermodynamics , and our results serve as the fundamental principle of nonequilibrium thermodynamics of feedback control .we note that , in our results such as eq .( [ fluctuation_f2 ] ) , the thermodynamic quantities and the information contents are treated on an equal footing .therefore , our theory may be regarded as the nonequilibrium version of `` information thermodynamics '' , which serves as the fundamental theory of nonequilibrium information heat engines .in this appendix , we discuss the physical meanings of the entropy production in the following two typical setups to clarify the typical situations to which our results apply . _ isothermal processes ._ we assume that there is a single heat bath at temperature , and that the initial distributions of both forward and backward experiments are in the canonical distributions .we stress that we do not assume that the final distributions of both the forward and backward experiments are in the canonical distributions : the final distribution of the forward ( backward ) experiments does not necessarily equal the initial distribution of the backward ( forward ) experiments .let be the hamiltonian of the system with the time symmetry . 
the canonical distribution with parameter is given by : = e^{\beta ( f(\lambda ) - h ( x , \lambda))},\ ] ] where is the helmholtz free energy .in this situation , the entropy production reduces to = \beta ( w[x_n ] - \delta f),\ ] ] where : = h(x_n , \lambda_{\rm fin } ) - h(x_0 , \lambda_{\rm int } ) - q [ x_n]\ ] ] is the work performed on the system from the external parameter , and is the free - energy difference . in this case , eq .( [ integral1 ] ) leads to the jarzynski equality ( [ jarzynski1 ] ) , and the second law ( [ second1 ] ) reduces to inequality ( [ second_work ] ) ._ transition between arbitrary nonequilibrium states : _ we assume that there are several heat baths , and that we can control the strength of interaction between the system and the baths through . in other words , we can attach or detach the system from the baths by controlling ; for example , we can attach an adiabatic wall to the system .we set an arbitrary initial distribution ] , where $ ] is the final distribution of the forward experiments .although this choice of the backward initial state is artificial and is difficult to be experimentally realized except for special cases , this backward initial state is a theoretically useful tool to derive a version of the second law of thermodynamics as follows . in this case , the entropy production is given by = - \ln p_n [ x_n ] + \ln p_0 [ x_0 ] - \sum_i \beta_i q_i [ x_n],\ ] ] and its ensemble average leads to where \ln p_n [ x_n ] dx_n\ ] ] is the shannon entropy at time . by introducing notation ,the second law ( [ second1 ] ) leads to we are grateful to y. fujitani , h. hayakawa , h. hasegawa , j. m. horowitz , s. ito , k. kawaguchi , t. s. komatsu , n. nakagawa , k. saito , m. sano , s. sasa , h. suzuki , h. tasaki , and s. toyabe for valuable discussions .this work was supported by a grant - in aid for scientific research on innovative areas `` topological quantum phenomena '' ( kakenhi 22103005 ) from the ministry of education , culture , sports , science and technology ( mext ) of japan , and by a global coe program `` physical science frontier '' of mext , japan .ts acknowledges jsps research fellowships for young scientists ( grant no .208038 ) and the grant - in - aid for research activity start - up ( grant no .11025807 ) .j. c. doyle , b. a. francis , and a. r. tannenbaum , `` _ _ feedback control theory _ _ , '' ( macmillan , new york , 1992 ) .k. j. strom and r. m. murray , `` _ _ feedback systems : an introduction for scientists and engineers _ _ , '' ( princeton university press , 2008 ) .d. j. evans , e. g. d. cohen , and g. p. morriss , phys .71 * , 2401 ( 1993 ) .g. gallavotti , and e. g. d. cohen , phys .lett . * 74 * , 2694 ( 1995 ) . c. jarzynski , phys .lett . * 78 * , 2690 ( 1997 ) .j. kurchan , j. phys .gen . * 31 * , 3719 ( 1998 ) .g. e. crooks , j. stat .phys . * 90 * , 1481 ( 1998 ) .g. e. crooks , phys .e * 60 * , 2721 ( 1999 ) .j. l. lebowitz and h. spohn , j. stat .phys . * 95 * , 333 ( 1999 ) . c. maes , j. stat .phys . * 95 * , 367 ( 1999 ) . c. maes , f. redig , and a. van moffaert , j. math. phys . * 41 * 1528 ( 2000 ) . c. jarzynski , j. stat* 98 * , 77 ( 2000 ) .j. kurchan , arxiv : cond - mat/0007360 ( 2000 ) .h. tasaki , arxiv : cond - mat/0009244 ( 2000 ) . t. hatano and s .-sasa , phys .lett . * 86 * , 3463 ( 2001 ) .d. j. evans and d. j. searles , adv .phys . * 51 * , 1529 ( 2002 ) .r. van zon and e. g. d. cohen , phys .* 91 * , 110601 ( 2003 ) . c. jarzynski , j. stat .mech : theor .p09005 ( 2004 ) . c. 
jarzynski and d. k. wjcik , phys .lett . * 92 * , 230602 ( 2004 ) .t. harada and s .-sasa , phys .lett . * 95 * , 130602 ( 2005 ) .u. seifert , phys .lett . * 95 * , 040602 ( 2005 ) .m. esposito and s. mukamel , phys .e * 73 * , 046129 ( 2006 ) .d. andrieux and p. gaspard , j. stat .p02006 ( 2007 ) .k. saito and a. dhar , phys .* 99 * , 180601 ( 2007 ) .t. ohkuma and t. ohta j. stat .p10010 ( 2007 ) .r. kawai , j. m. r. parrondo , and c. van den broeck , phys .lett . * 98 * , 080602 ( 2007 ) .a. gomez - marin , j. m. r. parrondo , and c. van den broeck , phys . rev .e * 78 * , 011107 ( 2008 ) .t. s. komatsu and n. nakagawa , phys .lett . * 100 * , 030601 ( 2008 ) .t. s. komatsu , n. nakagawa , s. i. sasa , and h. tasaki , phys .* 100 * , 230602 ( 2008 ) .y. utsumi and k. saito , phys .b * 79 * , 235311 ( 2009 ) .m. campisi , p. talkner , and p. hnggi , phys .lett . * 102 * , 210401 ( 2009 ) .j. ren , p. hnggi , and b. li , phys .lett . * 104 * , 170601 ( 2010 ) .h. hasegawa , j. ishikawa , k. takara , and d.j .driebe , phys .a * 374 * , 1001 ( 2010 ) .m. esposito and c. van den broeck , phys .* 104 * , 090601 ( 2010 ) .m. campisi , p. talkner and p. hnggi , phys .105 , 140601 ( 2010 ) s. vaikuntanathan and c. jarzynski , euro .. lett . * 87 * , 60005 ( 2010 ) .g. m. wang _et al . _ ,rev . lett . * 89 * , 050601 ( 2002 ) .j. liphardt _et al . _ ,science * 296 * , 1832 ( 2002 ) .e. h. trepagnier _et al . _ ,* 101 * , 15038 ( 2004 ) .d. m. carberry _ et al .* 92 * , 140601 ( 2004 ) .d. collin _ et al ._ , nature * 437 * , 231 ( 2005 ) .f. douarche _et al . _ ,* 97 * , 140603 ( 2006 ) .d. andrieux _ et al .lett . * 98 * , 150601 ( 2007 ) .s. toyabe _et al . _ ,e * 75 * , 011122 ( 2007 ) .s. toyabe _lett . * 104 * , 198103 ( 2010 ) .k. hayashi _et al . _ ,lett . * 104 * , 218103 ( 2010 ) .s. nakamura _et al . _ ,lett . * 104 * , 080602 ( 2010 ) .v. serreli __ , nature * 445 * , 523 ( 2007 ) .s. rahav , j. horowitz , and c. jarzynski , phys .* 101 * , 140602 ( 2008 ) .e. r. kay , d. a. leigh , and f. zerbetto , angew .chem . * 46 * , 72 ( 2007 ) .et al . _ ,nature * 465 * , 202 ( 2010 ) .f. j. cao , l. dinis , j. m. r. parrondo , phys .* 93 * , 040603 ( 2004 ) .k. h. kim and h. qian , phys .e * 75 * , 022102 ( 2007 ) .b. j. lopez _* 101 * , 220601 ( 2008 ) .f. j. cao and m. feito , phys .e * 79 * , 041118 ( 2009 ) .m. feito , j. p. baltanas , and f. j. cao , phys .e * 80 * , 031128 ( 2009 ) .m. bonaldi _et al . _ ,lett . * 103 * , 010601 ( 2009 ) .h. suzuki and y. fujitani , j. phys .78 * , 074007 ( 2009 ) .t. sagawa and m. ueda , phys .104 * , 090602 ( 2010 ) .y. fujitani and h. suzuki , j. phys .79 * , 104003 ( 2010 ) .t. brandes , phys .lett . * 105 * , 060602 ( 2010 ) .m. ponmurugan , phys .e * 82 * , 031129 ( 2010 ) .j. m. horowitz and s. vaikuntanathan , phys .e * 82 * , 061120 ( 2010 ) .y. morikuni and h. tasaki , j. stat . phys .* 13 * , 1 ( 2011 ) d. abreu and u. seifert , euro .lett . * 94 * , 10001 ( 2011 ) . j. m. horowitz and j. m. r. parrondo , euro .. lett . * 95 * , 10005 ( 2011 ) .m. esposito and c. van den broeck , euro .. lett . * 95 * , 40004 ( 2011 ) .s. vaikuntanathan and c. jarzynski , phys .e * 83 * , 061120 ( 2011 ) .t. sagawa , j. phys .: conf . ser . *297 * , 012015 ( 2011 ) .s. lahiri , s. rana , and a. m. jayannavar , arxiv:1109.6508 ( 2011 ) .d. v. averin , m. mottonen , and j. p. pekola , arxiv:1108.5435 ( 2011 ) . j. m. horowitz and j. m. r. parrondo , arxiv:1110.6808 ( 2011 ) .j. c. maxwell , `` _ _ theory of heat _ _ , '' ( appleton , london , 1871 ) . 
_ `` maxwell s demon 2 : entropy , classical and quantum information , computing '' _ , h. s. leff and a. f. rex ( eds . ) , ( princeton university press , new jersey , 2003 ) .l. szilard , z. phys .* 53 * , 840 ( 1929 ) . l. brillouin , j. appl . phys . * 22 * , 334 ( 1951 ) . c. h. bennett , int .* 21 * , 905 ( 1982 ) .r. landauer , ibm j. res . dev . * 5 * , 183 ( 1961 ) .s. lloyd , phys .a * 39 * , 5378 ( 1989 ) .s. lloyd and w. h. zurek , j. stat . phys .* 62 * , 819 ( 1991 ) .s. lloyd , phys . rev . a * 56 * , 3374 ( 1997 ) . h. touchette and s. lloyd , phys .* 84 * , 1156 ( 2000 ) . w. h. zurek , quant - ph/0301076 ( 2003 ) . m. o. scully _et al . _ , science * 299 * , 862 ( 2003 ) .t. d. kieu , phys .* 93 * , 140403 ( 2004 ) .a. e. allahverdyan _et al . _ , j. mod .optics , * 51 * , 2703 ( 2004 ) . h. t. quan _et al . _ ,* 97 * , 180402 ( 2006 ) .m. a. nielsen , c. m. caves , b. schumacher , and h. barnum , proc .london a , * 454 * , 277 ( 1998 ) .t. sagawa and m. ueda , phys .lett . * 100 * , 080403 ( 2008 ) .k. jacobs , phys .a * 80 * , 012322 ( 2009 ) .t. sagawa and m. ueda , phys .102 * , 250602 ( 2009 ) ; phys .106 * , 189901(e ) ( 2011 ) .k. maruyama , f. nori , and v. vedral , rev .phys . * 81 * , 1 ( 2009 ) .s. w. kim , t. sagawa , s. de liberato , and m. ueda , phys .lett . * 106 * , 070401 ( 2011 ) .g. welch and g. bishop , `` _ _ an introduction to the kalman filter _ _ '' , technical report tr 95 - 041 , university of north carolina , department of computer science ( 1995 ) .d. p. bertsekas , `` _ _ dynamic programming and optimal control _ _ '' , ( athena scientific 2005 ) .r. d. vale and f. oosawa , adv .* 26 * , 97 ( 1990 ) .f. julicher , a. ajdari , and j. prost , rev .phys . * 69 * , 1269 ( 1997 ) .j. m. r. parrondo , b. j. de cisneros , appl .phys . a * 75*,179 ( 2002 ) .p. reimann .phys . rept . *361 * , 57 ( 2002 ) .
We establish a general theory of feedback control on classical stochastic thermodynamic systems, and generalize nonequilibrium equalities such as the fluctuation theorem and the Jarzynski equality to processes with feedback control involving multiple measurements. Our results generalize previous work to the case of general measurements and multiple heat baths. The obtained equalities involve additional terms that characterize either the information obtained by measurements or the efficacy of feedback control. A generalized Szilard engine and a feedback-controlled ratchet are shown to satisfy the derived equalities.
physicists are happy . their resources amount of problems to solve is infinite .it is not so , however , in almost all other professions ; the numbers of car buyers , voters , butterflies to catch and girls to kiss for the first time are limited . if one takes everything , others have to look for another hobby or job . on the other hand , cooperation is a fingerprint of modern society .all what we get except from our domestic gardens , we get from other people .the aim of this work is to present numerical results of a set of equations , where the above facts are built in as assumptions .the only way to increase one s power or richness or speed of getting resources , which are treated here as synonymous , is to profit work of somebody else .then , each agent has neighbours who feed its with given speeds .the limitation of resources is taken into account as the global coupling via a nonlinear verhulst - like term .these are first two r.h.s .terms of our basic equations , and these terms do not depend on .third term is to include some action of an -th agent .namely , it selects from his neighbours the one most sensitive for this action , and enhances feeding from this particular neighbour .the problem belongs to the large class of models designed to describe a competition for resources .most frequently , however , some kind of dynamic equilibrium is considered , and only a few authors are interested in an ultimate catastrophe .example giving , although statistical physics provides tools for analysing stock market , the term `` bankruptcy '' is absent in physical journals ( see and for exceptions ) .it seems worthwhile to take a glance on a collision of expanding society with the boundary of limited resources .time evolution of the individual agent power is described by differential equation : ^ 2\\ + \lambda_3\cdot\max_{1\le j \le m}\left[d_{ij}p_j(t)r_j(t)\right ] \end{split } \label{eq1}\ ] ] coupling constant ( speed of feeding ) and ( sensitivity ) are random positive reals normalized to unity and fixed during simulation , while , and describe intensities of the three terms .the constants are equal to one for active agents and to zero in other case . at the beginning , all agents are active , but once is negative for given , is switched to zero and the -th agent is eliminated from the game .we deal with a set of differential equations which are piecewisely continuous . at the moments of time when for any up - to - now - active agent , the equations are switched from one analytical solution to another one . in this sense ,the formalism is equivalent to a coupled map lattice , but the number of equations changes in time .similar approach was applied already in .note however that in our case , the maps are to be integrated numerically .we consider two different neighbourhoods : ( i ) each individual has randomly chosen neighbours , or ( ii ) each individual has nearest geometrical neighbours . 
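The extracted form of Eq. (1) is incomplete, so the sketch below uses one possible reading of the model as an illustration: the first term is linear feeding from active neighbours with speeds c_ij, the second is a global Verhulst-like saturation proportional to the square of the total power, and the third enhances feeding from the neighbour for which d_ij p_j is largest. Agents whose power drops below zero are switched off permanently, and either random or nearest-neighbour (chain) neighbourhoods can be selected. All coupling constants, system sizes, and the simple Euler integration are illustrative choices, not the values or the integration scheme of the original simulations.

```python
import numpy as np

N = 100            # number of agents (illustrative)
M = 4              # neighbours per agent
lam1, lam2, lam3 = 1.0, 5e-5, 0.5   # illustrative intensities of the three terms
dt, steps = 0.01, 30000
geometric = True   # True: chain with M nearest neighbours; False: M random neighbours

rng = np.random.default_rng(1)
if geometric:
    offsets = [k for k in range(-(M // 2), M // 2 + 1) if k != 0]
    neigh = np.array([[(i + o) % N for o in offsets] for i in range(N)])
else:
    neigh = np.array([rng.choice([j for j in range(N) if j != i], M, replace=False)
                      for i in range(N)])

c = rng.random((N, M)); c /= c.sum(axis=1, keepdims=True)   # feeding speeds (one reading of "normalized to unity")
d = rng.random((N, M)); d /= d.sum(axis=1, keepdims=True)   # sensitivities
p = rng.random(N)                                           # random initial powers
active = np.ones(N, dtype=bool)

for _ in range(steps):
    total = p[active].sum()
    dp = np.zeros(N)
    for i in np.flatnonzero(active):
        pj = p[neigh[i]] * active[neigh[i]]                 # only active neighbours feed
        dp[i] = lam1 * np.dot(c[i], pj) - lam2 * total**2 + lam3 * np.max(d[i] * pj)
    p += dt * dp                                            # simple Euler step (illustrative)
    dead = active & (p < 0.0)
    active[dead], p[dead] = False, 0.0                      # eliminated agents never return

print("surviving agents:", int(active.sum()),
      " mean power of survivors:", float(p[active].mean()) if active.any() else 0.0)
```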
in the latter case , the agents form a one - dimensional chain with periodic boundary conditions .note that the formed network of agents is a continuous analogue to the kauffman model , designed for a simulation of genetic systems .we solve numerically and check how the kind of neighbourhood , the number of directly interacting agents and the strengths of interactions influence ( i ) time evolution of the average power ( ii ) and number of active agents after very long times .the simulation is started with active agents ( for ) , each with randomly chosen initial power .the simulation takes time steps , each long , what guarantees the numerical stability .\(a ) + ( b )two subsequent stages of the time evolution can be observed independently of kind of neighbourhood ( fig .[ fig1 ] ) .initially , the average power increases distinctly , but its distribution becomes wide . after some transient time , however , resources are exhausted .richness of some agents falls to zero and they are eliminated .cooperation becomes less effective , what leads to subsequent falls .finally , small percent of agents survive in a steady state .distinct differences are found between the cases of geometrical and random neighbourhood ( fig .[ fig1 ] ) . for the case of random neighbours ,usually only one agent survives , .this simplifies eq .to what gives subsequent power decrease of the last agent as .if two or more cooperating agents survive , we are faced with a set of nonlinear equations .analytically , a simplified case can be considered , when , .then , the average power tends to a positive stable fixed point .numerical results suggest , that this is the rule also in the general case ( fig . [ fig1 ] ) .the marked difference between the results for random and geometrical neighborhoods is an illustration of the old truth _ do ut des_. in other words , it is better to help friends which can reward than to people randomly selected in the street .a group of agents feeding each other can survive , if they spend resources moderately enough .a kind of an equilibrium with a given environment seems to be possible for the geometrical neighborhood , as long as the verhulst term is compensated by the remaining terms in eq . ..the percentage of the society s successes for geometrical neighbourhood and various sets of . , , , , .[ cols="^,^,^,^,^,^,^,^,^,^,^",options="header " , ] the results given above refer to the case of the geometrical neighborhood .let us define as a success of a society the case when more than one cooperating agents survive .then , tab .[ tab1 ] gives the percentage of successes for various sets of .it is astonishing ( at least for us ) that the roles of these two terms ( first and third ) in the r.h.s . 
of eq .differ so much .actually , both these terms are designed to increase the power of a given agent .the difference is only that the third term is a kind of local optimization , while the first one is automatic .it seems that this kind of dynamic reaction of a given agent is particularly relevant at the border of `` death '' ( or `` bankruptcy '' or so ) , when is close to zero .average power in asymptotic time , presented in tab .[ tab2 ] , does not show this effect .there , both terms act in the same way and can be mutually replaced to get approximately the same result .we are somewhat surprised with the fury of crisis , which can be observed in fig .simultaneously , distinct but continuous progress of the average power is substituted by wild oscillations , and the number of agents is strongly reduced .the effect arises abruptly , without any preceding warnings in the curve shape .soon , unavoidable elimination of almost all agents is observed for all applied sets of input parameters .we imagine the model as a parabolic description of the process of breaking of bonds and a destruction of a complex system .we feel that in this text , full of analogies , we are at the border of abuse of language .we apologize for that .however , we have found that it is particularly difficult to describe a transient process in precise terms of statistical physics , most of them designed for stationary processes .our process could be treated as stationary , if we allow the population of agents to be slowly reproduced , maybe with retaining some fruitful information , and the whole story is repeated many times .this kind of investigation could link to a penna - like model and to some kind of self - organization of the structure and strength of bonds between agents .the authors are grateful to dietrich stauffer for criticism on manuscript and paying our attention to term `` bankruptcy '' and ref .the numerical calculations were carried out in ack - cyfronet - agh .10 j. hofbauer and k. sigmund , _ evolutionary games and population dynamics _ , cambridge up , cambridge 1998 .d. stauffer , int .c * 11 * ( 2000 ) 1081 .j. liebreich , int . j. modc * 10 * ( 1999 ) 1317 .a. aleksiejuk , a. hoyst , _ a simple model of bank bankruptcies _ , physica a ( 2001 ) in print .k. kaneko , physica d * 103 * ( 1997 ) 505 .s. a. kauffman , _ the origins of order _ , oxford up , oxford 1993. s. moss de olivieira , p. m. c. de olivieira and d. stauffer , _ evolution , money , war and computers _ ,teubner , stuttgart - leipzig 1999 .
A network of agents cooperates on a given area. The time evolution of their power is described by a set of nonlinear equations. The limitation of resources is introduced via a Verhulst term, equivalent to a global coupling. Each agent is fed by some other agents from its neighborhood. Two subsequent stages of the time evolution can be observed. Initially, the richness of everybody increases distinctly, but its distribution becomes wide. After some transient time, however, resources are exhausted. The richness of some agents falls to zero and they are eliminated. Cooperation becomes less effective, which leads to subsequent falls. Finally, a small percentage of agents survives in a steady state. We investigate how cooperation influences the rate of survival. * Cooperation and surviving with limited resources * K. Malarz and K. Kułakowski _ Department of Theoretical and Computational Physics, Faculty of Physics and Nuclear Techniques, University of Mining and Metallurgy (AGH), Al. Mickiewicza 30, PL-30059 Kraków, Poland _ e-mail: .edu.pl, .ftj.agh.edu.pl
in data hiding , a very old field named steganography is used since the antiquity . as defined by cox _ et al . _ , steganography denotes _ the practice of undetectability altering a work to embed a message _ " . in the classical problem of the prisoners , alice and bob are in prison and try to escape .they can exchange documents , but these documents are controlled by an active warden named wendy .cox defines the warden as active when _ she intentionally modifies the content sent by alice prior to receipt by bob _ " .these modifications can slightly modify the content and degrade the hidden information . in this work ,we consider that all modifications performed by wendy are modeled by an additive white gaussian noise ( awgn ) and we propose to study the limits of such systems . since our specific active warden context is similar to the case of watermarking with awgn channel , we propose to study the capacity according to the shannon definition as the maximum information bits that can be embedded in one sample subject to certain level of the active warden attack ( an awgn attack in this case ) . in sequel , we evaluate the statistical undetectability by the kullback - leibler distance ( kld ) between the probability density functions ( p.d.f . ) of the stego - signal and the cover - signal , since the warden detects the message by comparing the stego - document probability density function with that of the cover - document . in , author used kld to evaluate the security of stego - systems in the context of the passive warden . in this work , cachin s security criterionis not used since the context is different ( active warden context ) .we propose here to base our comparative study on informed data hiding schemes as the scalar costa scheme ( scs ) .one of the major work already proposed on these type of scheme by guillon _ et al . _ experimentally found that scs is statistically detectable due to artifacts in the p.d.f .of the stego - signal . the wayproposed to make it undetectable is the use of a specific compressor on the signal leads to a less flexible scheme .le guelvouit proposed to use trellis - coded quantization ( tcq ) in order to hide the message : the author shows experimentally that the p.d.f . 
of the stego - signalis not affected by the embedded message .we fully complete this study and also theoretically demonstrate this result .moreover , we propose in this work an evaluation of steganographic performance in an active warden context of the spread transform scalar costa scheme ( st - scs ) , which is often use for robust watermarking .we demonstrate with experiments and analytic formulations the good statistical undetectability level of this system , then we compare its capacity and the compromise between the capacity and the statistical undetectability with other systems .+ let us first list some notational conventions used in this paper .vectors are notes in bold font and sets in black board font .data are written in small letters , and random variables in capital ones ; ] , ] are denoted respectively as , and in this section .if the information bits are equiprobable , then ( see appendix [ appendixa ] ) : } p_s \left ( \frac{x - \alpha u}{1 - \alpha } \right ) \textrm , \label{eq1}\ ] ] where } ] .it is given by the following equation ( in sequel , we do not use the index of the variable for ease of presentation ) : where represents the costa s optimization parameter and a cover - signal is modeled by a realizations set of gaussian random variables , independents and non stationary : , \ldots , s[g]\} ] for the trellis states and we suppose that all these states follow an uniform distribution such as : . in tcq - based stego - system , we substitute the cover - samples by , , the codeword of sub - codebook which corresponds to the state and message - bit .it is given by ) } = ( n + m/2 - i / n ) \delta ] for . by leading on appendix [ appendixa ] , the p.d.f .formulation of tcq stego - signal for a fixed state is : }(x - u_{(n , m , e ) } ) \nonumber \\ & & \times p_{s } \left ( \frac{x - \alpha u_{(n , m , e)}}{1 - \alpha } \right ) \textrm , \end{aligned}\ ] ] and ) p_{e}(\textbf{e}[i ] ) \nonumber \\ & & = \frac 1 { ( 1 - \alpha ) } \sum_{n , m } \frac 1 n \sum_{i = 1}^{n/2 } 1 _ { \left [ - \frac 1 { 2(1 - \alpha ) } , \frac 1 { 2(1 - \alpha ) } \right ] } \left ( x - u_{(n , m , \textbf{e}[i ] ) } \right ) \nonumber \\ & & \times p_{s } \left ( \frac{x - \alpha u_{(n , m , \textbf{e}[i])}}{1 - \alpha } \right ) \textrm , \end{aligned}\ ] ] if the number of states is large and by leading on the properties of the riemann sum , then : } \left ( x - ( n + \frac m 2 - \gamma \right ) \delta ) \nonumber \\ & & \times p_{s}\left ( \frac{x - \alpha \left ( n + \frac m 2 - \gamma \right ) \delta}{1 - \alpha } \right ) \textrm { ~d } \gamma \textrm .\end{aligned}\ ] ] if we replace by its two possible values , i.e. 0 or 1 , and make the following variable change , we obtain : the transformation of the cover - signal is modeled by a realizations set of gaussian random variables , independents and non stationary , i.e. , \ldots , s^{\scriptsize \textrm{st}}[g/\tau ] \} ] and it is modeled by a set of gaussian , independents and non stationary random variables , i.e. 
, \ldots , t[n]\} ] then since = \pm 1 / \sqrt \tau ] , thus the previous equations becomes now , we compute the p.d.f of the codeword conditionally to , , and the message : where represents the kronecker symbol .therefore in this work , we consider as a random variable independent of and .therefore and now , we make the following variable change : then , we obtain is a random variable which the realizations take just two values , and since is also considered as equiprobable , the marginalization over this two variables and over and gives : chen , b. and wornell , g. w. : quantization index modulation : a class of provably good methods for digital watermarking and information embedding , ieee trans .information theory , vol .47 , pp .14231443 , may 2001 .
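As a concrete illustration of the embedding rule and of the detectability measure used throughout this comparison, the short sketch below embeds equiprobable bits with the scalar Costa scheme and estimates the Kullback-Leibler distance between the stego-signal and cover-signal distributions from histograms. The quantization step, the value of the optimization parameter, and the stationary Gaussian cover model are illustrative assumptions for this sketch, not the settings of the reported experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
G = 200_000                 # number of cover samples (illustrative)
sigma_s = 1.0               # cover-signal standard deviation
delta = 1.0                 # quantization step
alpha = 0.6                 # Costa optimization parameter (illustrative)

s = rng.normal(0.0, sigma_s, G)                 # cover signal
m = rng.integers(0, 2, G)                       # equiprobable message bits

# SCS: quantize s onto the coset lattice delta*Z + m*delta/2 and move a fraction
# alpha of the quantization error (dither omitted for brevity)
q = delta * np.round((s - m * delta / 2.0) / delta) + m * delta / 2.0
x = s + alpha * (q - s)                         # stego signal

def kld(p_samples, q_samples, bins=200):
    """Histogram-based estimate of D(p || q); a small floor avoids log(0)."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p_hist, edges = np.histogram(p_samples, bins=bins, range=(lo, hi), density=True)
    q_hist, _ = np.histogram(q_samples, bins=bins, range=(lo, hi), density=True)
    w = edges[1] - edges[0]
    p_hist, q_hist = p_hist + 1e-12, q_hist + 1e-12
    return float(np.sum(w * p_hist * np.log(p_hist / q_hist)))

print("KLD(stego || cover) ~", kld(x, s))
```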
Several authors have studied stego-systems based on the Costa scheme, but only a few have given both theoretical and experimental justifications of the performance of these schemes in an active warden context. We provide in this paper a steganographic and comparative study of three informed stego-systems in the active warden context: the scalar Costa scheme, trellis-coded quantization, and the spread transform scalar Costa scheme. Relying on analytical formulations and on experimental evaluations, we show the advantages and limits of each scheme in terms of statistical undetectability and capacity in the case of an active warden. The undetectability is given by the distance between the distributions of the stego-signal and the cover-signal, measured by the Kullback-Leibler distance.
synchronous activity has been observed in many regions of the brain and has been implicated as a correlate of behavior and cognition . in the hippocampal formation , where such activity has been studied most thoroughly , neurons discharge in several behaviorally important synchronous rhythms . among these patternsare the theta ( 4 - 12 hz ) and gamma ( hz ) rhythms , which appear as nested rhythms under conditions of active exploration and paradoxical sleep , as well as hippocampal sharp waves ( hz ) , which occur along with embedded fast ripples ( hz ) under conditions of rest and slow wave sleep . here, we investigate some mechanisms responsible for generating synchronous oscillations throughout the physiologically relevant range of frequencies ( 10 - 200 hz ) .two crucial results point to the importance of inhibitory interneurons in generating synchronous rhythms in the hippocampal formation .first , it has been shown in intact animals that interneurons fire robustly and synchronously in both the theta - gamma state and in the sharp wave - ripple state .second , _ in vitro _experiments have demonstrated that a functional network containing interneurons alone can support synchronous gamma activity .these and other experimental results have spurred both analytic and numerical studies of synchrony among neurons . among the principal conclusions of such studiesare that stable synchrony is supported by inhibition that is slow compared with neuronal firing rates ; and that firing rate decays linearly , eventually saturating , as a function of the decay time constant of inhibition ( ) .when the synaptic coupling is extremely fast , the coupling tends to push the neurons towards anti - synchrony .synchronous oscillations generated _ in vivo _ are almost certainly the product of interactions among neurons with some ( unknown ) degree of heterogeneity in excitatory drive and intrinsic excitability .much of the earlier work in the area has not explored the effects of heterogeneity in intrinsic spike rates .et al._(1993 ) considered a network of integrate - and - fire oscillators with heterogeneous external drive and all - to - all _ excitatory _ coupling .they found that for an infinite number of oscillators , those with an external drive below a critical value would be synchronized and those above the critical value would be asynchronous .this co - existence around the critical value persisted in the limit of vanishing heterogeneity .golomb and rinzel ( 1993 ) considered a heterogeneous network of all - to - all coupled inhibitory bursting neurons and found regimes of synchronous , anti - synchronous and asynchronous behavior when the width of the heterogeneity was changed .they considered a parameter regime that was synchronous for small heterogeneity .wang and buzski ( 1996 ) considered a hippocampal interneuron network with heterogeneity in the external drive and network connectivity .they found numerically that for a physiologically plausible parameters , coherent activity is only possible in the gamma range of frequencies .our purpose here is to understand more fully the implications of small levels of heterogeneity for the degradation of synchrony in networks of inhibitory fast spiking neurons and the mechanisms by which this degradation occurs . to this end, we have begun a coordinated set of analytic and numerical studies of the problem . 
in this paper, we numerically analyze a network of interneurons applicable to the ca1 region of the hippocampus .we consider slow inhibition and heterogeneity in the external drive .we find that small amounts of heterogeneity in the external drive can greatly reduce coherence .in addition , we find that coherence can be reduced in two qualitatively different ways depending on the parameters either by a transition to _ asynchrony _ where the cells fire independently of each other , or through _ suppression _ where faster cells suppress slower cells .the reaction of a network to heterogeneity is shown in the paper to be correlated with the dependence of firing frequency on the time constant of synaptic decay .we find in self - inhibiting cells or synchronous networks that this dependence divides into two asymptotic regimes . in the first ( the tonic - inhibition or _ tonic _regime ) , inhibition acts as if it were steady - state and only weakly affects discharge frequency . in the second ( the phasic - inhibition or _phasic _ regime ) , time - varying inhibition firmly controls discharge frequency . there is a gradual crossover between these regimes .the presence of a neuron or network in the tonic or phasic regime can most easily be determined by examining the ratio of the synaptic decay time constant to discharge period ( ) .( discharge period can be obtained from the full network or from a reduced model including only a single cell with self - inhibition . ) is large ( for our parameters ) and varies linearly with in the tonic regime . is small ( ) and only logarithmically dependent on in the phasic regime .however , if is _ too _ small ( ) , the phasic regime is departed and anti - synchrony is possible. networks of weakly heterogeneous ( less than 5% ) cells generally exhibit asynchrony ( defined here as the state of phase dispersion ) in the tonic regime . 
in the phasic regime, cells generally exhibit a form of locking , including synchrony , harmonic locking ( locking at rational ratios ) , and suppression .these results can be demonstrated analytically using a reduced model with mutual and self - inhibition .we conclude that mild heterogeneity in inhibitory networks adds effects that are not accounted for in previous analyses , but that are tractable under our current framework .in particular , we show that the prediction that slow inhibition leads to synchrony , made under assumptions of homogeneity , must be modified in the presence of mild heterogeneity .thus , the new framework provides a context for understanding previous simulations .in particular , it explains the mechanisms underlying asynchrony ( phase dispersion ) with slow decay of inhibition .these mechanisms differ from those underlying the loss of synchrony with faster - decaying inhibition .simulations were carried out using single - compartment neurons with inhibitory synapses obeying first - order kinetics .membrane potential in each point neuron obeyed the current balance equation where / , is the applied current , and are the hodgkin - huxley type spike generating currents , is the leak current and is the synaptic current .the fixed parameters used were : ms/ , ms/ , ms/ , mv , mv , mv , mv .these parameters are within physiological ranges and give the high spike rates typical of hippocampal interneurons .the phenomena described here seem largely independent of specific neuronal parameters .the activation variable was assumed fast and substituted with its asymptotic value )^{-1} ] , ) ] , )) ] .the odes were integrated using a fourth - order runge - kutta method .the free parameters were scanned across the following ranges : for applied current , 0 - 10 / ; for , the maximal synaptic conductance per cell , 0 - 2 ms/ ; for the synaptic decay time constant , 5 - 50 ms .as a measure of coherence between pairs of neurons , we generated trains of square pulses from the time domain responses of each of the cells ( fig .[ coho_ex_fig ] ) .each pulse , of height unity , was centered at the time of a spike peak ( resolution = 0.1 ms ) ; the width of the pulse was 20% of the mean firing period of the faster cell in the pair ( 0.2 in fig .[ coho_ex_fig ] ) .we then took the cross - correlation at zero time lag of these pulse trains .this is equivalent to calculating the shared area of the unit - height pulses , as shown in fig .[ coho_ex_fig]d .we took coherence as the sum of these shared areas , divided by the square root of the product of the summed areas of each individual pulse train .for the example shown in fig .[ coho_ex_fig ] , our algorithm gives coherence of 0.35 .our approach differs from the algorithm used by wang and buzski ( 1996 ) , in which trains of unit - height pulses are correlated for a bin width equal to or greater than the neuronal time scale .the difference between the two algorithms can be appreciated by considering the contribution made to the coherence measure by two spikes ( in two separate neurons ) occurring with time difference .the wang and buzski ( 1996 ) algorithm would see these as perfectly coherent if the spikes are in the same time bin and incoherent if they are not .the answer depends on where the bin edges fall , with probability of a coherence `` hit '' falling to zero when the bin width is less than . 
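A minimal sketch of the pulse-train coherence measure just described is given below: each spike is replaced by a unit-height pulse whose width is a fraction of the faster cell's mean period, and the zero-lag cross-correlation is computed as the shared area normalized by the geometric mean of the individual areas. The example spike trains and the time resolution are illustrative, not output of the network simulations.

```python
import numpy as np

def coherence(spikes_a, spikes_b, frac=0.2, dt=0.1):
    """Pulse-train coherence at zero lag.

    spikes_a, spikes_b : arrays of spike times (ms)
    frac : pulse width as a fraction of the faster cell's mean period
    dt : time resolution (ms); values here are illustrative defaults.
    """
    period_a = np.mean(np.diff(spikes_a))
    period_b = np.mean(np.diff(spikes_b))
    width = frac * min(period_a, period_b)          # the faster cell sets the pulse width
    t_end = max(spikes_a[-1], spikes_b[-1]) + width
    t = np.arange(0.0, t_end, dt)

    def pulse_train(spikes):
        x = np.zeros_like(t)
        for s in spikes:
            x[(t >= s - width / 2) & (t < s + width / 2)] = 1.0
        return x

    xa, xb = pulse_train(spikes_a), pulse_train(spikes_b)
    shared = np.sum(xa * xb)                        # shared area of the unit-height pulses
    return shared / np.sqrt(np.sum(xa) * np.sum(xb))

# Illustrative usage with two synthetic trains whose spikes slowly drift apart
a = np.arange(0.0, 200.0, 10.0)
b = np.arange(2.0, 200.0, 11.0)
print(coherence(a, b))
```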
in their algorithm coherence is a function of the bin width , and averaging across the population of cells ameliorates effects due to the placement of bin edges . in our algorithm ,the two spikes make a contribution to coherence that is continuously distributed between 0 ( 20% of firing period ) and 1 ( ) .although both algorithms give results that depend on the percentage of the firing period considered significant , our measure allows us to examine coherence in small networks with less discretization error .this change is important here specifically because we analyze small networks that phase - lock with a short but measurable phase difference .we mapped coherence vs. , , and for networks of 2 , 10 , and 100 cells with all - to - all inhibitory coupling . in networks with , coherence is plotted in the maps . in larger networks ,the plots show the average of the coherence measure taken for all pairs of neurons .we first consider the firing characteristics of a single self - inhibited neuron or , equivalently , a network of identical , synchronized , mutually inhibitory neurons .these simulations validate predictions from analytic work on simpler models and determine the ranges of the phasic and tonic regimes in parameter space .firing frequency of the single neuron was tracked over the parameter space of , , and .figure [ 1cell_fig]a shows sample time - domain traces for three values of ( 0.4 , 1.6 and 9.0 / ) . like mammalian interneurons , the modeled system of differential equations produces action potentials at rates up to 250 hz .figure [ 1cell_fig]b shows discharge frequency as a function of , for several values of . for large values of ( lower traces ) , this curve is roughly linear . for smaller values ( upper traces ) ,discharge frequency rises along a somewhat parabolic trajectory . for negative values of ,the self - inhibited neuron can fire at arbitrarily low frequencies ( data not shown ) , indicative of a saddle - node bifurcation and synchrony through slow inhibition . in fig .[ 1cell_fig]c we show discharge frequency versus for several values of , with fixed .the dependence of the frequency on for the lower two traces is similar to what was observed in the full network and _ in vitro _ by whittington _et al . _ ( 1995 ) .the phasic and tonic regimes are clearly illustrated in fig .[ 1cell_fig]d , in which the ratio is plotted versus for various values of . for large ( top traces ), is large and linearly related to , indicative of the tonic regime .in contrast , for small ( bottom trace ) , is small and depends only weakly on , indicative of the phasic regime .for our model and level of heterogeneity , parameter sets that give are in the phasic regime ; sets that give are in the tonic regime .presence in either the phasic or tonic regime is dependent on parameters other than .generally , the tonic regime is characterized by strong applied current and a relatively weak synapse so that the firing period is much faster than the synaptic decay time .the phasic regime occurs when either the applied current is weak and/or the synapse is strong so that the firing period is locked to the decay time .we simulated networks of two mutually inhibitory cells with self - inhibition .we include self - inhibition because it better mimics the behavior of a large network . in these and all other network simulations , mutual and self - inhibition are of equal weight . 
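The single-cell ratio of synaptic decay time to firing period referred to above can be estimated from a very small simulation of one self-inhibited cell. The sketch below uses rate functions and parameter values in the spirit of the Wang-Buzsaki (1996) interneuron model, a simplified synapse whose gating variable jumps to one at each spike and then decays exponentially with the synaptic time constant, and forward Euler integration; these are illustrative stand-ins rather than the exact model equations, parameter values, and fourth-order integration used in the simulations.

```python
import numpy as np

# Illustrative parameters in the spirit of the Wang-Buzsaki interneuron model
C, gNa, gK, gL = 1.0, 35.0, 9.0, 0.1            # uF/cm^2, mS/cm^2
ENa, EK, EL, Esyn = 55.0, -90.0, -65.0, -75.0   # mV
phi = 5.0
I_app = 1.5        # uA/cm^2
g_syn = 0.5        # mS/cm^2, self-inhibition
tau_s = 10.0       # ms, synaptic decay time constant
dt, T_max = 0.01, 1000.0                         # ms

def a_m(V): return -0.1 * (V + 35.0) / (np.exp(-0.1 * (V + 35.0)) - 1.0)
def b_m(V): return 4.0 * np.exp(-(V + 60.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 58.0) / 20.0)
def b_h(V): return 1.0 / (np.exp(-0.1 * (V + 28.0)) + 1.0)
def a_n(V): return -0.01 * (V + 34.0) / (np.exp(-0.1 * (V + 34.0)) - 1.0)
def b_n(V): return 0.125 * np.exp(-(V + 44.0) / 80.0)

V, h, n, s = -64.0, 0.8, 0.1, 0.0
spikes, above = [], False
for step in range(int(T_max / dt)):
    m_inf = a_m(V) / (a_m(V) + b_m(V))           # fast activation at its asymptotic value
    I_Na = gNa * m_inf**3 * h * (V - ENa)
    I_K = gK * n**4 * (V - EK)
    I_L = gL * (V - EL)
    I_syn = g_syn * s * (V - Esyn)
    V += dt * (I_app - I_Na - I_K - I_L - I_syn) / C
    h += dt * phi * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * phi * (a_n(V) * (1.0 - n) - b_n(V) * n)
    s += dt * (-s / tau_s)                       # first-order decay of self-inhibition
    if V > 0.0 and not above:                    # spike detected: reset the synaptic gate
        spikes.append(step * dt)
        s = 1.0
    above = V > 0.0

period = np.mean(np.diff(spikes[5:]))            # discard the initial transient
print("firing period T =", period, "ms;  tau_s / T =", tau_s / period)
```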
in networks of two interneurons with identical properties but different initial conditions ,the cells quickly synchronize ( phase - lock with zero phase difference ) over the entire examined range of , , and ( data not shown ) .slow - firing cells tend to synchronize more quickly than fast - firing cells , but the exact delay before synchronization depends on initial conditions and was not examined systematically . anti - synchrony is not stable in the parameter regime we considered , but could be with very small values of .when the input to each neuron is made mildly heterogeneous ( intrinsic spike rates 5% different ) , a more complex picture emerges . under the conditions of mild heterogeneity modeled here , but not necessarily under conditions of greater heterogeneity , the behavior of the two - cell network falls into one of four qualitative states , as exemplified by the traces of membrane potential and inhibitory conductance vs. time in fig .[ domains_fig ] . for small , large , and large conditions associated with the tonic regime the phasic component of synaptic inhibition received by each cell is small ( fig .[ domains_fig]a ) .the neurons influence each other s firing frequencies , but firing times are independent .we refer to this phase - dispersed state as the _asynchronous state_. as the phasic component of inhibition is increased , the phasic regime is approached . within the phasic regimelie three qualitative states . for appropriate choices of the level of inhibition ,the two - cell network enters a phase - locked state with a non - zero phase difference ( fig .[ domains_fig]b ) .we will continue to use the term synchrony to refer to this near - synchronous regime . for this model , heterogeneity of some sort( in this case , heterogeneity of intrinsic firing frequencies ) is a necessary and sufficient condition for near , as opposed to pure , synchrony .the size of the phase difference depends on the parameters chosen . with further increases in the level of inhibition ,the faster cell begins to suppress its slower partner , leading to what we term _ harmonic locking _[ domains_fig]c ) . in this example, cells fire in a 4:3 ratio , and exert temporally complex effects on each other during the course of one cycle ( 50 ms ) .finally , with enough inhibition , the faster neuron inhibits its slower counterpart totally , in what we term _ suppression _ ( fig .[ domains_fig]d ) . in suppression ,the sub - threshold dynamics of membrane potential in the suppressed cell are exactly phase locked to those of the faster cell .this exact relationship holds because our simulations do not include a synaptic delay term . without self - inhibition, this harmonic - locking regime is very small and not seen in the analogous parameter space ( data not shown ) .our heuristic explanation for this difference is as follows . without self - inhibition ,once the slower neuron is suppressed , the instantaneous preferred frequencies of the two cells diverge .the faster cell is uninhibited and , by firing faster , adds more inhibition to the slower cell , making it more difficult for the slower cell to escape . with self - inhibition ,each of the cells in the two - cell network receives an identical synaptic signal , effectively making the two cells more homogeneous .the added homogeneity increases the size of the region in which harmonic locking occurs at relatively small locking ratios . 
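As a rough operational complement to these qualitative definitions, the helper below classifies a pair of recorded spike-time trains into suppression, harmonic locking, near-synchrony, or asynchrony. The count-ratio and lag tolerances are arbitrary illustrative thresholds, and the toy spike trains at the bottom are synthetic, not simulation output.

```python
import numpy as np

def classify_pair(spikes_fast, spikes_slow, sync_frac=0.1):
    """Rough classification of a heterogeneous two-cell network into the four
    qualitative states discussed above, based only on spike times."""
    if len(spikes_slow) == 0:
        return "suppression"
    period_fast = np.mean(np.diff(spikes_fast))
    ratio = len(spikes_fast) / len(spikes_slow)
    if abs(ratio - 1.0) > 0.05:
        return "harmonic locking (spike-count ratio ~ %.2f)" % ratio
    # nearly equal rates: near-synchrony if each slow spike sits close to a fast spike
    lags = np.array([np.min(np.abs(spikes_fast - t)) for t in spikes_slow])
    if np.all(lags < sync_frac * period_fast):
        return "near-synchrony"
    return "asynchrony (dispersed phases)"

# Toy usage with synthetic spike trains
fast = np.arange(0.0, 500.0, 20.0)
slow = np.arange(1.0, 500.0, 20.0)
print(classify_pair(fast, slow))
```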
in order to observe network behavior over a large parameter range, we used the relatively simple measure of firing coherence ( see methods ) .a given level of coherence does not uniquely determine the qualitative behavior of the network ( asynchronous , synchronous , harmonic , or suppressed ) .however , the structures of coherence maps are stereotyped , and coherence maps can be correlated to the four qualitative network states .figures [ coho_fig]a - b show three dimensional plots of coherence in a two - cell network , plotted versus and for low ( and / ) and high ( and / ) applied currents .( the gray scale , which does not relate to coherence , is discussed below . ) even though the differences in intrinsic ( uncoupled ) firing frequencies for the two cells are small ( in each case ) , coherence is high and smoothly varying , corresponding to synchrony , only over a small region of parameter space .the extent of the synchronous region increases as decreases . increasing the heterogeneity reduces the size of the synchronous region . for differences greater than a few percent in the intrinsic ( uncoupled ) frequencies , the synchronous regionwas dramatically reduced in size ( data not shown ) . for a given ,synchrony is broken in two distinct ways if is either too small or too large . for large , large , and ( especially ) small ,the phasic coupling between the two cells is weak and they fire asynchronously ( i.e. , with dispersed phase ) . in this state , which is particularly large on the left side of fig .[ coho_fig]b , coherence has a value of about 0.2 , corresponding to the expected value of our coherence measure with `` memory '' equal to 20% of the spiking period . for large , high levels of coherenceare lost when the faster cell begins to suppress the slower cell , resulting in harmonic spiking .the particular pattern of harmonic spiking can change dramatically with small changes in parameters , resulting in the jagged coherence regions seen in figs .[ coho_fig]a - b .again , the harmonic region is particularly noticeable with large , as in fig .[ coho_fig]b . eventually , with large enough , the full suppression state can take hold , and coherence plummets to give a very flat region of coherence at a value of 0 .this state , favored by large and large , occupies a large region on the right side of fig .[ coho_fig]a .we argued in the discussion of fig .[ domains_fig ] that the network s presence in the asynchronous state is associated with the tonic regime , and that the transition from asynchrony to locking is associated with the transition from the tonic regime to the phasic regime . to demonstrate this effect ,we have gray - scale - coded the coherence maps of fig .[ coho_fig ] according to the value of obtained from single , self - inhibited cells with the same values of and total inhibition and taken as the average of the range seen in the heterogeneous population .the single - cell value of is useful as an indicator of the qualitative state of all the cells in the network because all the cells that are not suppressed fire at similar frequencies .this result is demonstrated by fig .[ compare_n ] , which shows plots of for four conditions : the n = 1 case ( solid lines ) , the n = 2 case with differences in intrinsic rates of around 4% ( dashed lines ) and 2% ( dashed - and - dotted lines ) ; and the n = 10 case with maximal heterogeneity of around 4% ( dotted lines ) . 
in all cases with more than one cell , a pair of traces corresponding to the fastest and slowest cells of the simulations are shown . in all cases ,the traces follow similar trajectories until the slowest cell is suppressed ( indicated by an abrupt end of the lower branchbefore the rightmost point is reached ) .this similarity in ( and hence ) for all unsuppressed cells is seen in both the phasic ( fig .[ compare_n]c ) and tonic ( fig .[ compare_n]d , right side ) regimes . returning to fig .[ coho_fig ] , the value of as a predictor of transitions in qualitative state and hence coherence implies that we should see transitions from asynchrony when drops below . as figs .[ coho_fig]a - b show , this approximate relationship does hold .furthermore , factors that change ( e.g. , changing ; cf . figs . [ domains_fig]a and [ domains_fig]b )have predictable effects on the extent of the asynchronous state in ( )-space .figures [ coho_fig]c - d show similar results with less heterogeneity ( / for panel c ; / for panel d ; these values approximate the mean one standard deviation for uniform distributions with limits as in figs .[ coho_fig]a - b ) . in these cases ,the same qualitative coherence map is evident , with a somewhat larger region of coherence .the qualitative coherence regions correspond to the same qualitative states from fig .[ domains_fig ]. we also simulated all - to - all connected networks of 10 and 100 heterogeneous inhibitory neurons and found qualitatively similar results .figures [ coho_fig]e - f show the coherence plots over the same parameter space as figs .[ coho_fig]a - b for a network of 10 heterogeneous cells .the level of inhibition per synapse , , scales with to keep the level of inhibition per postsynaptic cell , , constant .for the ten - cell case , applied current is uniformly distributed through the same ranges as in panels a - b ( [ 1.6 , 1.78 ] for panel e ; [ 9.0 9.9 ] for panel f ) .again , there are four qualitative states : an asynchronous state for small , more prevalent with higher ; a near synchronous state ; a harmonic state ; and a suppressed state . for the 10 cell network ,the transition to suppression is smoother than in the two - cell case .cells fall out of the rhythm to suppression one at a time , leading to a relatively smooth drop in coherence . at the highest values of , coherence has not yet dropped to zero because some cells are still able to synchronize with the fastest neuron of the network . in the harmonic state ,examination of time - domain traces ( data not shown ) reveals harmonic patterns , with a cluster of cells in synchrony while the slower cells drop in and out of the population rhythm .the coherent region for the ten - cell network is larger than in figs .[ coho_fig]a - b . applied currents ( and hence intrinsic frequencies ) of the two neurons in panels a - b are at the limits of the range of applied currents in the ten - cell network , making the effective level of heterogeneity smaller in the ten - cell case . the close agreement between panelsc - d and e - f supports this contention .we also performed a limited number of simulations of a 100-cell network with the same architecture , at parameter values representing orthogonal slices through the 3-dimensional coherence maps .results from these simulations are shown in fig .[ slices_fig ] , along with slices from the coherence maps of fig .[ coho_fig ] . in figs .[ slices_fig]a - b , coherence is plotted vs. for a fixed value of = 15 ms and at two levels of applied current . 
in figs .[ slices_fig]c - d , coherence is plotted vs. for a fixed value of = 0.5 ms/ .results from the 100-cell ( n = 100 ) and 10-cell ( n = 10 ) cases are quite similar , at both low ( panels a , c ) and high ( panels b , d ) levels of applied current .these results support the argument that the qualitative behavior of the network does not change with n , and thus that predictions based on single - cell analysis and simulations are applicable to moderately heterogeneous networks of arbitrary size .results are shown for both levels of heterogeneity in 2-cell networks .the dashed lines ( n = 2 ) , which are slices through the coherence maps of fig .[ coho_fig]a - b , have lower coherences that reflect the relatively large amounts of heterogeneity in these cases . the dashed - and - dotted lines ( n = 2 * ) show coherence values for slices through figs . [ coho_fig]c - d , with closer intrinsic frequencies chosen to approximate the standard deviations of the appropriate uniform distributions .these slices more nearly match the 10- and 100-cell cases .results from figs .[ compare_n ] and 6 also demonstrate the close relationship between the ratio and coherence ( as well as underlying qualitative states ) .values of from fig .[ compare_n ] are almost invariably associated with one of the locked states .values of , on the other hand , give rise to the asynchronous state , associated in fig .[ slices_fig ] with regions of flat coherence at a value of 0.2 ( e.g. , the leftmost portion of fig . [ slices_fig]b and the rightmost portion of fig . [ slices_fig]d ) .we show that the behavior of the firing frequency of a single self - inhibited cell can give insight into the network frequency and coherence .in particular , the ratio of the synaptic decay constant to the neuronal firing period has rough predictive value in determining whether a mildly heterogeneous network is synchronous or asynchronous .this predictive value only holds with mild heterogeneity , however ; greater heterogeneity leads to a mixture of qualitative states which invalidates our analyses .we also emphasize the importance of even mild heterogeneity in affecting network dynamics .previously , it had been argued that slowly decaying inhibition generally had a synchronizing influence .however , for mildly heterogeneous cells , the relation of the frequency ( or period ) to the synaptic decay time must also be considered . for homogeneous cells ,the synaptic coupling is only required to align the phases in order to obtain synchrony . for mildly heterogeneous cells ,the coupling must both align the phases and entrain the frequencies .the latter is more difficult for the network to achieve .it occurs only when the inhibition is strong enough so that firing period is dominated by the decay time .however , if the inhibition is too strong then the slower cells will never fire .thus , there are two ways to destroy full network synchrony .the first is through effective de - coupling where the cells tend to fire asynchronously .the second is through suppression , in which the neurons with higher intrinsic rates fire in near - synchrony and keep their slower counterparts from firing . between synchrony and suppression harmonic lockingis also possible .this occurs when the suppression of the slower cell is temporary but lasts longer than the period of the faster cell .we should note that anti - synchrony , not seen in the parameter regimes presented here , can become stable with very fast synapses ( i.e. , ) . 
for even mildly heterogeneous cells , synchronyin which all inhibitory cells participate is possible only over a small region of parameter space that decreases as the heterogeneity is increased .the region where synchrony occurs in a large network of known ( mild ) heterogeneity and connectivity can be approximated from a two - cell network .the frequency of firing and conditions allowing synchrony can be estimated analytically from a reduced model neuron with self - inhibition as in large networks , the frequency in single cells depends on the applied current , the synaptic strength and the synaptic decay time . in the synchronous region ,the firing period depends linearly on the decay time and logarithmically on the other parameters so that frequency will depend directly on the decay rate .however , the contribution from the logarithmic factor can be fairly large and thus must be calculated explicitly .this can be estimated analytically from the reduced model , or from simple simulations of a single , self - inhibiting cell ( see fig . [ 1cell_fig ] ) .the result that the value of from single - cell simulations has predictive value for the qualitative state and coherence of a network of arbitrary size is intriguing and potentially useful , because it points the way to determining the qualitative and quantitative behavior of a neuronal network based on simple behavior that can be studied numerically or even analytically .however , the predictive capabilities of this index should not be overestimated .a careful examination of fig . [ coho_fig ] shows that the mapping between and asynchrony is not precise .the value of at which the transition will occur is dependent on many factors , including the level of heterogeneity and , in all likelihood , the level and form of connectivity in the network .the value of alone is not sufficient to determine the point of transition from synchrony to harmonic locking and suppression , even in a model of known heterogeneity and architecture .making this determination requires knowledge of and in addition to .studies of the 2-cell network were successful in elucidating the qualitative states of the larger circuit , though the exact form of transitions from asynchrony to synchrony and synchrony to suppression is different in detail for our simulations of the 2-cell and n - cell cases . in general, the behavior of the 2-cell network matches that of the n - cell circuit better in the asynchronous state , associated with the tonic regime , than in the harmonic and suppression states , associated with the phasic regime .this result is expected from our theoretical framework since the tonic regime is defined as the regime in which only the tonic level of inhibition is important . since we normalized the synaptic strength by n , the net amount of inhibition is independent of the network size .thus , we take this result as additional evidence that our hypothesized mechanisms of loss of coherence are correct .our numerical results are similar to those of wang and buzski ( 1996 ) , but our explanations differ considerably . 
in heterogeneous networks , they also saw a decline in coherence at both low and high firing rates . they attributed the decline in synchrony at low rates to two factors . first , they point out that cells are more sensitive at low firing rates than at higher rates to changes in applied current , a source of heterogeneity in both studies . this point is correct , but in our work we controlled for this factor , using smaller percent differences in small currents than in large currents to achieve similar percent differences in intrinsic firing rates , and we still saw a drop-off in coherence at low rates . second , wang and buzsáki ( 1996 ) cite what they call a `` dynamical '' effect , in which inhibition is fast enough to destabilize the synchronous state . previous work shows that the outcome of such dynamical effects for _ homogeneous _ networks is anti-synchrony . in our parameter regime , the loss of coherence in _ heterogeneous _ networks at low firing rates ( i.e. , with small ) is associated with the phasic regime and is due to the suppression of firing in slower cells . wang and buzsáki ( 1996 ) make the phenomenological argument that the loss of synchrony at high firing rates is related to a need for a greater density of synaptic connectivity . we considered all-to-all connectivity and found that the loss of coherence associated with high firing rates ( tonic regime ) is caused by the loss of too much of the phasic component of inhibition . furthermore , we argue that one can approximate the parameters for which this loss of coherence occurs by analyzing a single , self-inhibitory cell . it should be possible to generalize these results and arguments to the case with less than all-to-all coupling . it has been suggested that the selection of the network frequency _ in vivo _ is determined by the tonic excitation and the parameters regulating the synaptic coupling . our results support this hypothesis . however , we have demonstrated that with heterogeneous cells , synchrony may not be possible at all frequencies . in particular , a network of this kind seems unlikely to support synchronous firing at 200 hz , a frequency that seems too fast to be synchronized by gaba receptors with ms ( and ) . our framework implies that this result , which has been seen in simulations before , holds in general for heterogeneous cells in the tonic regime . our results emphasize the difficulty of generating synchronous oscillations in interneuronal networks over a large range of frequencies , such as in the transition from the gamma / theta mode to the sharp wave / fast ripples mode .
at gamma frequencies ,the factor should be less than 1 with typical values of .thus , full synchrony at gamma frequencies is possible but requires careful regulation of the system to prevent suppressive effects .the question of whether or not the suppression we see is incompatible with physiological data can not be answered , because it is extremely difficult to estimate the number of interneurons participating in the rhythm .we believe that this issue can be explored , and our model tested , by examining the power of the gamma field potential in a brain slice as is modified by pentobarbital .our model predicts that the power in this signal should decrease as rises and suppression becomes more evident .a negative result in these experiments would indicate that our model is missing a fundamental element .one such element is intrinsic or synaptic noise , which can act to release neurons from suppression ( white , unpublished observations ) .the more difficult goal for our model to achieve is that of firing synchronously at ripple ( 200 hz ) frequencies , as has been reported in the behaving animal .one or more of several conceivable explanations may underlie this apparent robustness in hippocampal function at high frequencies .first , it is possible , but unlikely , that heterogeneity in the intrinsic firing frequencies of interneurons is very low ( % ) .second , the operant value of may be lower than we believe ; a value of 5 ms would conceivably allow synchrony at 200 hz with levels of heterogeneity of around 5% .third , each interneuron may fire not at 200 hz , but rather at a lower frequency of , say , 100 hz , during sharp waves . under this explanation ,the 200-hz ripple would be generated by clusters of two or more populations of neurons spiking independently .finally , some factor(s ) not considered here may enhance synchrony at high frequencies .gap junction - mediated electrical coupling among interneurons , for which some evidence exists in the hippocampal region ca1 , is perhaps the most likely such factor .we thank m. camperi for assistance in writing code , and s. epstein , o. jensen , c. linster , and f. nadim for helpful discussions .b. ermentrout , j. rinzel , and r. traub provided valuable feedback on earlier versions of the manuscript .this work was supported by grants from the national science foundation ( dms-9631755 to c.c ., n.k . and j.w .) , the national institutes of health ( mh47150 to n.k . ; 1r29ns34425 to j.w . ), and the whitaker foundation ( to j.w . ) katsumaru h , kosaka t , heizman cw , and hama k. ( 1988 ) gap - junctions on gabaergic neurons containing the calcium - binding protein parvalbumin in the rat hippocampus ( ca1 regions ) .brain res . _ 72:363 - 370 .ylinen a , bragin a , ndasdy z , jand g , szab i , sik a , and buzski g. ( 1995 ) sharp wave - associated high - frequency oscillation ( 200 hz ) in the intact hippocampus : network and intracellular mechanisms ._ j. neurosci ._ 15:30 - 46 .
we study some mechanisms responsible for synchronous oscillations and loss of synchrony at physiologically relevant frequencies ( 10 - 200 hz ) in a network of heterogeneous inhibitory neurons . we focus on the factors that determine the level of synchrony and frequency of the network response , as well as the effects of mild heterogeneity on network dynamics . with mild heterogeneity , synchrony is never perfect and is relatively fragile . in addition , the effects of inhibition are more complex in mildly heterogeneous networks than in homogeneous ones . in the former , synchrony is broken in two distinct ways , depending on the ratio of the synaptic decay time to the period of repetitive action potentials ( ) , where can be determined either from the network or from a single , self - inhibiting neuron . with , corresponding to large applied current , small synaptic strength or large synaptic decay time , the effects of inhibition are largely tonic and heterogeneous neurons spike relatively independently . with , synchrony breaks when faster cells begin to suppress their less excitable neighbors ; cells that fire remain nearly synchronous . we show numerically that the behavior of mildly heterogeneous networks can be related to the behavior of single , self - inhibiting cells , which can be studied analytically .
the small magellanic cloud ( smc ) is an irregular dwarf galaxy .although the smc is believed to be gravitationally bound to the milky way , recent studies suggest that this may not be true .the smc is relatively metal poor and very gas - rich , and represents an ideal galaxy to study the formation of stars in low - metalicity environments .its proximity to the large magellanic cloud ( lmc ) and milky way has resulted in a turbulent history .recent evidence points to a close encounter with the lmc some 1.5 gyr ago which deformed the smc , creating a long , thin filament of hi called the magellanic stream .the interaction between the smc and lmc is still dynamic , with observations showing that metal poor star clusters with ages 200 myr and a metalicity ratio [ fe / h ] -0.6 within the lmc originated from the infalling smc gas .statistical techniques are indispensable while studying turbulence in the interstellar medium ( ism ) . images of neutral hydrogen ( hi ) distribution are frequently used as in most cases it is possible to ignore self - absorption .hi also occupies a large portion of the galactic disc ( roughly a 20% filling factor ) and its movements should reflect large - scale turbulence .furthermore , the prevalence of hi means that it can be studied not only in our galaxy but in nearby galaxies as well .several studies have performed statistical analyses of the hi distribution in the smc .these studies determined the spatial power spectrum of the hi intensity and employed the velocity channel analysis ( vca ) technique ( lazarian & pogosyan 2000 ) to reveal shallower - than - kolmogorov velocity and density spectra .however , power spectra , while being informative about the distribution of the energy with scale , are not sensitive to gas topology . for instance , hi surveys of the smc have shown numerous filamentary structures and shells of expanding gas .these surveys have detected 501 shells dispersed throughout the smc ( see fig .[ shells ] ) , six of which have radii 350pc and are large enough to be classified as supergiant shells ( sgss ) .many of these shells were created by massive associations of ob stars , but some show no spatial correlation to any young stellar population. these ` orphan ' shells may be a result of gamma ray bursts or collisions between high velocity clouds and the smc .alternatively , they could be produced by the ism turbulence .this calls for employing other techniques for the statistical studies of smc .recent years have been marked by an increased interest to statistical studies of astrophysical turbulence ( see lazarian 2004 , for a review ) .in particular , two techniques velocity channel analysis ( vca ) and velocity coordinate spectrum ( vcs ) which are capable of studying power spectra of velocity and density have been developed ( lazarian & pogosyan 2000 , 2004 , 2006 ) .however , the topology of the gas distribution can not be described by the power spectrum . on the contrary , genus analysis , with its ability to accurately describe and quantifythe topology , is a promising synergetic tool in the quest to better understand turbulence .genus statistics was developed to study the topology of the universe and distribution of galaxies in three and two ( coles , 1986 , 1991 , , plionis et al .1992 , davis & coles , 1993 , coles et al . 
1993 ) dimensions .subsequent projects have used genus statistics to study the topology of the temperature variations within the cosmic microwave background and have also been applied to a systematic study of the variations of the two - dimensional genus with mhd simulations .the use of genus statistics for the study of hi was first discussed in , and subsequent studies presented the first genus curves for the smc .a recent paper by kim & park ( 2007 ) provided a more thorough study of the topology of the hi peak brightness temperature distribution in the lmc .the goal of the present paper is to extend the technique in lazarian ( 2004 ) and kim & park ( 2007 ) by providing a quantitative measure of the uncertainties involved in the genus studies .we then apply the technique on the smc hi data set to quantitatively describe the topology of the hi distribution .kowal et al .( 2007 ) have shown that genus statistics of a 3d density distribution from synthetic observations obtained with mhd simulations agrees with the genus results for the 2d integrated column density of the same data set .therefore , genus statistics complements the power spectrum analyses in providing insights into the physical processes that shape the ism .the structure of this paper is organized as follows . 2 discusses our approach to the analysis of data . 3 summarizes the observational procedure and subsequent data analysis of the hi column density map .section 4 is an analysis of the cropped regions within the smc , focusing on the genus shift and its topological implications .section 5 discusses the results and posits astrophysical connections between the genus analysis and the smc . in cosmological studies the distributions in question , e.g. the distribution of cmb intensityare nearly gaussian and genus is used to study small deviations from the gaussian . dealing with the ism , in particularly , as in this paper, with the distribution of column densities of smc , one can not expect deviations from symmetry to be small , a priori .genus is a quantitative measure of topology . it can characterize both 3d and 2d distributions .the two - dimensional genus can be represented as ( coles 1988 , mellot et al .1989 ) : + where low- and high - density regions are selected with respect to a given density threshold . as a result , for a given 2d intensity map ,a curve corresponding to different thresholds emerge ( see fig .1 ) . for instance, a uniform circle would have a genus of 0 ( one contiguous region of high density , i.e. `` island '' and one contiguous region of low density , i.e. `` hole '' ) while a ring ( a cd for example ) would have a genus of -1 ( one contiguous region of high density and two contiguous regions of low density ) .two separate circles , one the other hand , correspond to the genus of 1 ( one `` hole '' and two `` islands '' ) , three separate circles to the genus of 2 etc . using the language of richard gott, one can say that genus can distinguish between the meatball and swiss cheese topology .furthermore , the genus can be represented mathematically as an integral using the gauss - bonnet theorem . 
in more specific terms for the 2d case we have : where is an observed intensity, is the principal radius of curvature and the integral follows a set of contours of the surface at given , and is a normal vector , pointing outside of a contour .much like in eq .( 1 ) , a contour enclosing a high - density region will give a positive contribution , while a contour enclosing a low - density one will give a negative contribution .essentially , at a given threshold value , the genus value is the difference between the number of regions with a density higher than and those with a density lower than .the threshold values in eq .( 2 ) are selected so that they represent area fractions . for a gaussian fieldthey are defined as : raising the threshold level from the mean value would cause the low - density regions to merge together , causing the genus to become positive , reach its maximum and begin to decrease to zero while positive regions begin to disappear with larger .similarly , lowering the threshold level from the mean value would cause the high - density regions to coalesce , resulting in a negative genus .more importantly however , the genus curve for a random gaussian distribution is known ( coles 1988 ) : this particular form of the genus curve characterizes a gaussian random field , whatever its power spectrum is .this is an extremely important point of the genus analysis , because it allows us to separate topology effects from the ones caused by the power spectrum behavior .we expect , that the sign of the genus curve at the mean intensity level does describe the field topology .the positive genus will represent a clump - dominated field , while the negative one should mean the domination of holes .however it is more convenient to work with the zero of the genus curve , because it can be naturally normalized to the field variance . in this case , for the intensity with the subtracted mean value the negative corresponds to the clumpy topology , and the positive indicates the `` swiss cheese '' one .an example of the genus curve is presented in fig .[ sample_clumps ] .that is an example of clumpy topology , as .an additional information on the distribution topology can be obtained by comparing the genus curve for the given distribution with the one for a gaussian distribution but the same dispersion ( see [ gauss ] ) .the points of maximum and minimum of the genus curve correspond to percolation of the distribution ( see colombi et al .2003 ) . the slower fall at large thresholds of the observed genus compared to the genus of the gaussian distribution indicates that the islands are more discrete and pronounced than for the gaussian distribution .as a gaussian field always has a neutral topology , fitting of the gaussian genus for estimation of can not be an optimal choice .as we do not rely on a particular field statistics , we need a robust method for estimation of for any form of a genus curve . 
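as a rough illustration of the counting definition in eq. (1), the two-dimensional genus at a given threshold can be estimated by labelling the connected regions above and below that threshold and taking the difference of the counts. the sketch below (python, with placeholder toy data) is only a schematic version of such a procedure: it uses the area-fraction convention for the thresholds described above and ignores the boundary and curvature corrections implicit in the gauss-bonnet form.

import numpy as np
from scipy import ndimage

def genus_curve(image, area_fractions=np.linspace(0.05, 0.95, 19)):
    """genus vs. threshold for a 2d map: number of connected regions above the
    threshold ("islands") minus number of connected regions below it ("holes").
    thresholds are chosen so that a fixed fraction of the map area lies above
    each one, following the area-fraction convention used in the text."""
    values = np.sort(image.ravel())
    genus, thresholds = [], []
    for f in area_fractions:
        nu = values[int((1.0 - f) * (values.size - 1))]
        _, n_islands = ndimage.label(image > nu)
        _, n_holes = ndimage.label(image < nu)
        genus.append(n_islands - n_holes)
        thresholds.append(nu)
    return np.array(thresholds), np.array(genus)

# toy usage: two gaussian clumps on a noisy background -> clump ("meatball") topology
yy, xx = np.mgrid[0:128, 0:128]
clumps = np.exp(-((xx - 40)**2 + (yy - 40)**2) / 50.0) + np.exp(-((xx - 90)**2 + (yy - 80)**2) / 50.0)
rng = np.random.default_rng(0)
field = clumps + 0.05 * rng.standard_normal(clumps.shape)
nu, g = genus_curve(field)
print(g)

the estimation of the zero-crossing from such a curve is the subject of the next paragraph.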
in this paper we use a fit of a polynomial between the global extrema of the genus curve , with the additional condition of zero derivative at the ends of that interval . practical calculations show that a 5th-order polynomial is flexible enough to represent the varying shape of non-gaussian genus curves , and always has a single zero , which we interpret as an estimate of the genus zero-crossing . higher-order polynomials tend to oscillate when applied to a noisy genus curve . it can be shown with a simple example that large-scale trends , even linear ones , can significantly distort a genus curve , changing its shape and making it noisier . such large-scale gradients usually do not carry any topological information and should be removed before the zero-crossing is estimated . here we have options such as subtracting a polynomial background , or fourier filtering of the low harmonics in the whole map or in a particular region . another possible source of contamination is the presence of a few compact features with amplitudes high enough to affect the mean value . such features should either be removed from the map or be weakened by reducing the image contrast . taking the median value instead of the mean one in this calculation is also an option . on the other hand , the presence of white gaussian receiver noise would not change the topology , as it corresponds to a completely symmetric genus . we can substantiate this statement as follows . let us consider some small region near the intersection of the plane at the threshold level and the map surface . if the map has positive curvature in the direction of the gradient , adding such noise shifts the genus count in the positive direction ; if the curvature is negative , the shift is negative . on the other hand , the mean curvature at the mean level will be positive for clumps and negative for holes , which means that the genus count at this level will be shifted up for clumps and down for holes , i.e. , it will not change its sign . this means that the topology type cannot be changed by adding such noise . the analysis would be incomplete without an estimate of the variance of the zero-crossing . the estimation of this variance distinguishes our present study from the earlier ones ( lazarian 2004 , park & kim 2007 ) . following suggestions by peter coles , a pioneer of the genus analysis , we generated for each map a set of images with randomly shifted phases of the individual harmonics . this procedure causes the field to take on gaussian statistics , and therefore a zero genus shift , but allows us to effectively estimate its variance . in more detail , we take the fft of the region being studied and assign the phase of each harmonic to a uniformly distributed random variable , keeping the hermitian conjugacy of the fourier image . after the inverse fft we calculate the respective zero-crossing . after repeating this procedure several times we calculate the variance of the zero-crossing . in our case 10 realizations appeared to be enough to obtain a statistically relevant variance ( a schematic numerical version of this procedure is sketched below , after the description of the data ) . the hi column density image used in this study is a composite obtained with the australia telescope compact array ( atca ) and the parkes telescope in australia ( fig . [smc_atca] ) . atca , a radio interferometer , was used to observe 320 overlapping regions containing the smc . these data were combined with observations from the 64 m parkes radio telescope , which observed a 4.5 by 4.5 degree region centered on ra 01 , dec -72 . the data from these two telescopes were merged to create a complete image of the hi column density of the smc , with a continuous sampling of spatial scales from 30 pc to 4.5 kpc .
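returning to the phase-randomization error estimate of the previous section, a schematic version is sketched here: take the fourier transform of the region, keep the amplitudes but draw each phase at random (the hermitian symmetry of a real map is enforced automatically by the real-to-complex transforms), and recompute the genus zero-crossing for each surrogate map. the snippet below assumes the genus_curve helper from the earlier sketch is available and uses a simple linear interpolation for the zero-crossing rather than the 5th-order polynomial fit described above.

import numpy as np

def phase_randomize(image, rng):
    """return a map with the same fourier amplitudes as `image` but with
    uniformly random phases (a gaussianized surrogate of the field)."""
    amp = np.abs(np.fft.rfft2(image))
    phases = rng.uniform(0.0, 2.0 * np.pi, amp.shape)
    return np.fft.irfft2(amp * np.exp(1j * phases), s=image.shape)

def zero_crossing(nu, genus):
    """crude zero-crossing of the genus curve (linear interpolation between
    the two samples that bracket the sign change)."""
    sign_change = np.where(np.diff(np.sign(genus)) != 0)[0]
    if sign_change.size == 0:
        return np.nan
    i = sign_change[0]
    g1, g2 = genus[i], genus[i + 1]
    return nu[i] + (0.0 - g1) * (nu[i + 1] - nu[i]) / (g2 - g1)

def nu0_variance(image, n_real=10, seed=1):
    """variance of the zero-crossing over phase-randomized surrogates."""
    rng = np.random.default_rng(seed)
    nu0 = []
    for _ in range(n_real):
        surr = phase_randomize(image, rng)
        nu, g = genus_curve(surr)          # genus_curve from the earlier sketch
        nu0.append(zero_crossing(nu, g))
    return np.nanvar(nu0)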
for more information on the merging process ,see stanimirovic et al .the effective angular resolution of the combined column density image is 98 , implying a spatial resolution of 30 pc at a distance of 60 kpc .the effective smoothing scale is given here in terms of the effective hpbw and is always greater than 30 pc . for the 150x150 pixel region , ( ) . using a larger than 15% ofthe image length resulted in a genus curve that was useless - both the high - density and low - density regions coalesced together , resulting in too few regions for the genus statistic to analyze .the background was subtracted using a 5th order polynomial with subsequent filtering out of the first two fourier harmonics .an offshoot of the fast fourier transform ( fft ) package in idl was also utilized to study the hi column density image of the smc .the smc column density map was converted into fourier space and non - informative frequencies were removed ( see fig .( [ fft ] ) ) .an inverse fourier transform was then applied to recover the information .this methodology allows us to decompose the image into different frequency ranges , thereby probing the smc at different scales . by removing the low frequencies , it is possible to focus on the small scale structure of the smc .similarly , by removing the high frequencies we decrease the resolution and focus on the large scale structure of the smc .this method works in conjunction with the smoothing scale methodology as described above .figure [ smc_entire ] shows the genus shift as a function of smoothing radius for a 400x300 pixel region enclosing the majority of the smc . at all smoothing scales ( ), the smc shows negative shift , being statistically significant at small ( ) and medium ( ) scales .the results from the 150x150 pixel regions can be seen in figure [ 150x150_genus ] and tab . [tab : shifts ] . at scales below 70 - 80pc , every sampled region shows a genus curve with apparent negative .furthermore , at these scales each region exhibits asymmetry , though this varies from region to region .five of the nine surveyed regions have a larger amplitude on the negative ( low - density ) side of the genus curve .we can infer from this asymmetry that the low - density holes are more isolated than expected while the high - density clumps are more contiguous than expected .combined with the negative genus shift , these two statistics show that there are merged high - density clumps surrounded by isolated low - density holes .the negative shift can be attributed to the numerous small clumps of gas which compose the ism of the smc . at the small scales at which the genus is probed ,the numerous shells dispersed throughout the smc are not seen .the genus shift curves for the individual regions diverge as the smoothing scale increases . 
at scales of 120 to 150 pc ,two of 150x150 regions show a positive genus , implying `` swiss - cheese '' topology .we can conclude from the rising genus shift that the small clumps are merging together while the holes and sgss are coming into focus .four regions have a negative shift at large scales ( 150 - 250 pc ) , and one of them shows mixed behavior ( positive genus at 120 - 150 pc and negative genus at 220 - 270 pc ) .the error bars obtained by randomly applying phases to fourier modes as described in 2 show that for some scales and for some parts of the smc our results are more reliable than for others .the example of genus curves for an individual region with mixed topological behavior for different is shown on fig .[ genera ] .the genus shift estimated in the previous section can give us insight into the underlying physical processes of the smc . referring to the genus shift of the entire smc ( see figure [ smc_entire ] ) , it is readily apparent that the shift varies according to the smoothing scale . at the smallest scalesstudied , the genus shift has a negative value , implying a strong clump topology .we can infer that the clumps are caused by clouds of hi gas , as well as the numerous knots and filaments that compose the smc . at medium scales( 120 - 200 pc ) , the genus shift takes on a neutral or slight positive value .this increase in the genus shift can be connected to the abundance of shells that compromise the smc , as the shells which are interspersed throughout the smc have a mean radius of pc .this is consistant with our results , as the largest positive values of the genus shift occur between 120 and 150 pc . at the largest scales studied ( 170 pc ) , the genus shift takes on a slight negative value ; this is due to the prominent ` wing ' and ` bar ' features , indicating that at even the largest scales , ( large ) clumps dominate the topology . from our results , we reached the conclusion that the smc tends to exhibit a clump topology .the dominance of the `` meat ball '' topology is expected in the case of supersonic turbulence ( kowal et al . 2007 ) .several of the surveyed regions display characteristics similar to those of supersonic ( low case ) an asymmetrical genus curve with a tail that extends into the high - density portion of the plot ( see , figure 18 for more information ) . however , other possible reasons for a clump topology could be cooling instability and self - gravity of the hi gas . however , our results differ from the results in kim & park ( 2007 ) who found that the hi distribution in the lmc shows mainly hole topology at intermediate scales , despite the fact that fewer shells are present in the lmc ( 124 as compared to in the smc ). there are several important factors that could contribute to this difference .firstly , kim & park ( 2007 ) performed genus analysis on the hi peak brightness temperature image , while our work used the hi column density distribution .peak brightness images emphasize more small - scale and shell structure , while the column density distribution emphasizes density distribution washing out small - scale fluctuations that are caused both by density and velocity fluctuations . 
secondly ,while the lmc has an almost face - on hi disk , the smc has a larger inclination and a non - disk - like morphology .in fact , several authors have claimed a large line - of - sight depth of the smc ( see stanimirovic , staveley - smith , jones 2004 for details ) .therefore , any line - of - sight through the smc integrates over a longer physical depth . it would be interesting to apply our procedure on the lmc hi column density image for a more direct comparison .interestingly , for several regions where a hole topology is detected , the estimated shell size is comparable to the one from kim & park ( 2007 ) . a possible reason for a hole topology in regions 7 and 9 could be related to the shells created by supernovae explosions and stellar winds .it is interesting to note that those two regions are both off the main ` bar ' of the smc and most likely correspond to areas of low line - of - sight depth .nigra et al .( 2008 , in preparation ) study the eastern wing region ( our region 9 ) and find a small line - of - sight thickness .our region number 7 has been identified in stanimirovic et al .( 1999 ) and as containing several ` orphan ' shells with high luminosity . hypothesize that these shells may be associated with an ancient chimney .we stress , that it is important to gauge the statistical techniques against numerical simulations . recently undertook a extensive investigation of density statistics in mhd turbulence , which included different statistical tools , including genus .they concluded that the genus statistic _ is _ is sensitive to the sonic mach number _ . in the case where the magnetic pressure dominates , i.e. the high- case , and subsonic mach number ( e.g. 0.3 ) the genus curve is highly symmetrical . for the low- cases and supersonic (e.g. 2.1 , 6.5 ) , the curve stretches into the positive ( high - density ) side and becomes increasingly non - symmetrical as increases .the end result is that it is possible to obtain from the plot of genus curve : the mach number is directly related to the length of the high - density tail .furthermore , the genus statistic also gives topological information .for values of low , the genus curve is symmetrical , implying that there are equal numbers of high - density clumps and low - density holes .as increases , turbulence creates more high - density structures , which causes the tail of the genus curve to extend towards the high - density side .this corresponds well to the results of studying fractal dimension of density while varying the density threshold in kowal & lazarian ( 2007 ) . in our analysiswe assumed that the hi gas is optically thin .this is an important assumption , as the analysis of the effects of absorptions in turbulent gas in lazarian & pogosyan ( 2004 ) suggests that absorption introduces a critical spatial scale for plane - of - sky statistics , with the fluctuations larger than this critical scale being strongly affected by absorption effects .this means that the genus the way we use it above is not applicable to data , but should be applicable to c data , with data is in limbo , as this isotope is frequently thick .potentially , the topology of gas at scales less than the critical scale is also of interest .however , lazarian & pogosyan ( 2004 ) study suggests that fluctuations of the integrated self - absorbing spectral line may be dominated by velocity caustics , provided that the spectrum of density is sufficiently steep , e.g. 
, .the kolmogorov spectrum corresponds to and is steep according to the aforementioned definition .the topology studies are , as we discussed above , are different from the studies of spectrum .however , we believe that the criteria for density fluctuations being observable is the same . therefore , we expect that for one can analyze genus for the scales less than the critical one .note , that in terms of the present study , the constancy of the spectral indexes observed in stanimirovic & lazarian ( 1999 ) provides an additional evidence of hi not being substantially affected by absorption .in the paper above , we have analyzed the hi column density map of the smc in an attempt to elucidate its topological features .a brief summary of our results is as follows : * we have extended the genus analysis for column densities of diffuse gas via presenting a new procedure for estimating of topology indicator and its variance .* at small scale of smoothing ( ) the smc exhibits a negative shift , indicating a clump or `` meatball '' topology .we conjecture that this is due to the numerous clumps of gas created by supersonic turbulence .we know from numerical simulations that numerous high contrast clumps are produced by such a turbulence .* as the smoothing scale increases ( ) , the shift of the genus curve becomes less negative , trending towards a slight positive shift .this can be attributed to the averaging of small clumps , while larger shell and sgs structures throughout the smc are less affected by smoothing . at these medium scales , the smaller gas clumps are less important , while the shells come into focus .these shells are potentially a result of stellar winds and sne from ob associations . * for larger regions with scales 100pc the genus curve becomes noisier , however in four cases the correspondent negative shift may indicate that most of the shells have sizes less than the smoothing scale . * the nine 150x150 pixel regions of the smc exhibit slightly different trends .although they all possess a clump topology at small scales , the curves at larger scales are rather different .some trend towards a hole topology at larger while others exhibit no positive genus shift .a possible reason for hole topology in regions 7 and 9 could be related to the shells created by supernovae explosions and stellar winds .we hope that in future the regions with particular topology can be identified with regions of physically distinct behavior .we may infer that the smc is somewhat heterogeneous from region to region .* genus analysis is an effective complementary tool in the study of turbulence .the power spectrum contains information on velocity fluctuations but does not possess topological information . combined with genus statistics , both the velocity statistics and topological information can be obtained for a selected object .we thank peter coles for his important input on the genus analysis and dmitry pogosyan for his suggestions .j.g . would like to acknowledge the help he received from a.c . , a.l and s.s . during the summer reu program .is supported by the nsf research experience for undergraduates ( reu ) program .acknowledges nsf grant ast 0307869 and the center for magnetic self - organization in astrophysical and laboratory plasmas .we also thank the anonymous referee for a number of valuable points .lazarian , a. 
1999 , plasma turbulence and energetic particles in astrophysics , proceedings of the international conference , cracow ( poland ) , 5-10 september 1999 , eds . michał ostrowski , reinhard schlickeiser , obserwatorium astronomiczne , uniwersytet jagielloński , kraków 1999 , p. 28-47 .

table [tab:shifts] . the cropped 150x150 pixel regions of the smc : region number , x range , y range , and the smoothing-scale ranges ( pc ) considered for each region .
region 1 : x 1500-3000 , y 3000-4500 , scales 35-70
region 2 : x 2000-3500 , y 2000-3500 , scales 35-70 and 150-170
region 3 : x 2800-4300 , y 3600-5100 , scales 35-70
region 4 : x 3000-4500 , y 2000-3500 , scales 35-50
region 5 : x 3300-4800 , y 1900-3400 , scales 35-40
region 6 : x 4000-5500 , y 2000-3500 , scales 35-40 and 200-270
region 7 : x 4000-5500 , y 4000-5500 , scales 35-100 , 120-150 and 220-230
region 8 : x 4000-5500 , y 500-2000 , scales 35-50 and 170-250
region 9 : x 500-2000 , y 2000-3500 , scales 35-50 and 120-150
in this paper , genus statistics have been applied to an hi column density map of the small magellanic cloud in order to study its topology . to learn how topology changes with the scale of the system , we provide the study of topology for column density maps at varying resolution . to evaluate the statistical error of the genus we randomly reassign the phases of the fourier modes while keeping the amplitudes . we find , that at the smallest scales studied ( ) the genus shift is in all regions negative , implying a clump topology . at the larger scales ( ) the topology shift is detected to be negative in 4 cases and positive ( `` swiss cheese '' topology ) in 2 cases . in 4 regions there is no statistically significant topology shift at large scales .
the application of metamaterials to cloaking and invisibility has been explored in several exciting papers in recent years . in these contributions , several alternative realizations and techniqueshave been discussed , and the exotic properties of several classes of metamaterials have been shown to be possibly tailored in order to provide much reduced scattering from finite - sized objects in different configurations and schemes , for a wide range of frequencies of interest .recent reviews of the various possibilities are available for the interested reader ( see e.g.refs . ) .one such possibility takes advantage of the anomalous scattering response of thin plasmonic layers . as shown in ref . , artificial plasmonic materials with low or negative effective permittivity may provide scattering cancellation via their local negative polarizability .this technique , called plasmonic cloaking , is consistent with earlier works that have speculated how a composite particle combining positive and negative permittivity may provide identically zero scattering in the static limit .recent study of the dynamic case has shown that plasmonic cloaks may suppress not only the dominant dipolar scattering from moderately sized objects , but also higher - order multipolar orders for larger scatterers . in this vein, it is worth stressing that by `` cloaking '' we mean strong , or maximized , scattering reduction over a finite frequency band not necessarily complete invisibility , since residual scattering orders may always make the scatterer detectable to a certain extent .still , significant reduction of visibility is achievable with the proper plasmonic cloak design , as shown in several recent papers and as discussed in the following .plasmonic cloaking has been shown to offer several intriguing properties in a variety of setups .examples include intrinsic robustness to frequency and design variations , straightforward extension to arbitrary collections of objects and multi - frequency operation .moreover , the admission of fields inside the cloaked region , peculiar to this technique , may be used to suppress the inherent scattering from receiving antennas and sensing devices , which may open several interesting venues in non - invasive probing and sensing applications .extensions to ultrathin surface cloaks have also been put forward in this same context .all these studies were conducted , for simplicity , for a canonical spherical object , illuminated by a generic polarization of the impinging field .recent studies have shown that analogous concepts may be applied to more complex 3d geometries , based on the overall robustness of the integral cancellation effect .however , in several practical applications , in particular for the radar community , elongated objects may become of specific interest for these applications . ref . provides quasi - static formulas for infinite circular dielectric cylinders under normal incidence , which was later extended to 2d infinite conducting cylinders , also illuminated at normal incidence in ref .these results were also preliminarily extended to oblique incidence in ref .. moreover , in ref . the results of ref . for multi - frequency operation of spherical cloaks were extended to dielectric infinite cylinders . 
also , recent theoretical and experimental efforts reported the practical realization of plasmonic cloaking in 2d cylindrical geometries , proposing metamaterial designs for specific polarization of interest .the literature on plasmonic cloaking applied to cylinders , however , has often dealt with idealized 2d geometries : infinite cylinders , and incident waves normal to the cylinder axis , with specific polarization properties .this assumption , common to several other cloaking techniques applied to cylinders , makes the resulting calculations quite limited from a practical standpoint .it may be argued , in particular , that once the angle of incidence is modified , and the end effects of finite - length cylinders are considered , such cloaking effects may be severely limited , if not completely lost . in ,the effects of truncation were preliminarily considered for normal incidence . herewe analyze all these issues in great detail , first deriving a general cloaking theory for arbitrary illumination of an infinite cylinder .we show that it is indeed possible to find a suitable , robust plasmonic cloaking layer in this scenario , which under suitable conditions may operate over a broad range of incidence angles .we corroborate these findings with an extensive numerical analysis of finite length and truncation effects in practical cylindrical geometries , studying the overall scattering reduction for different angles of incidence and several design parameters .the present analysis applies to idealized metamaterial cloaks with isotropic properties .we leave to future work the practical limitations introduced by the specific realization of engineered metamaterial cloaks .consider the geometry of fig . [fig : cyl ] , depicting a circular cylinder of finite length , radius , permittivity and permeability , covered by a thin conformal cylindrical cloak shell of thickness , permittivity and permeability . in this section ,we examine the limiting case of an infinite circular cylinder ( ) illuminated by an arbitrary plane wave at oblique angle .the general scattering problem may be solved by expanding the impinging and scattered fields in cylindrical harmonics ( see e.g. refs . ) , which we apply to derive the scattering response as a function of incidence angle . anddiameter is covered uniformly , except the ends , by a metamaterial shell of radial thickness ( ) .the object is illuminated by an arbitrarily polarized plane wave , incident at an angle with respect to the cylinder axis ., width=177 ] without loss of generality , the incident wave may be decomposed into its transverse - magnetic ( tm ) and transverse - electric ( te ) components , with respect to the cylinder ( ) axis . by matching boundary conditions at the two radial interfaces ,the problem may be solved exactly for each cylindrical harmonic .analytical results for normal incidence , specifically applied to the cloaking problem at hand , were derived in ref . . 
in that special case ( normal incidence ) , only two tangential field components are non-zero at each interface : $e_z$ and $h_\varphi$ for tm polarization , and $h_z$ and $e_\varphi$ for te . this ensures that the scattering problem is easily solved as a set of four equations , and the te and tm mie scattering coefficients are easily expressed as a function of the determinants $u_n$ and $v_n$ : [eq:mie] $c_n = -\,u_n/(u_n + i v_n)$ , for either tm or te polarization . we have assumed here and in the following an $e^{-i\omega t}$ time dependence . the overall scattering efficiency , defined as the ratio of the total scattering cross section to the cylinder's physical cross section , is given by [eq:sum] $c_s \propto \sum_{n=-\infty}^{\infty}\left(|c_n^{\rm tm}|^2 + |c_n^{\rm te}|^2\right)$ . for given electrical and geometrical properties , the cylinder's visibility can be minimized by canceling the dominant scattering orders , which is possible under the condition $u_n = 0$ . in the quasi-static limit , i.e. , for very thin cylinders , approximate closed-form expressions for these cloaking conditions for te and tm incidence were reported in refs . . scattering for oblique incidence is a more challenging problem , since polarization coupling is usually involved , as discussed in appendix [app] . this implies that for oblique incidence purely te or tm cylindrical waves may also excite tm and te scattered waves , respectively . this feature will play an important role when analyzing the response of the cloak for different incidence angles . in the special circumstances for which this coupling is absent or minimized , e.g. , as discussed in the appendix , for azimuthally symmetric modes ( $n = 0$ ) , for conducting or high-contrast cylinders , or for small cross sections , the scattering coefficients may still be expressed as in eq . ( [eq:mie] ) , where $u_n^{\rm tm}$ and $v_n^{\rm tm}$ have generalized expressions ( [eq:un] , [eq:vn] ) : both are determinants whose entries are the cylindrical bessel functions of the first and second kind , $j_n$ and $y_n$ , and their derivatives with respect to the argument , evaluated at the core and cloak interfaces ( arguments $k^t a$ , $k_c^t a$ , $k_c^t a_c$ and $k_0^t a_c$ ) and weighted by the characteristic impedances of the corresponding regions ; $v_n^{\rm tm}$ differs from $u_n^{\rm tm}$ only in the column referring to the background region , where the bessel functions of the first kind are replaced by those of the second kind . the te expressions may be found by electromagnetic duality ( interchanging permittivities and permeabilities ) . these expressions coincide with those in ref . for normal incidence , as expected . the dependence on the angle of incidence is encoded in the transverse wave numbers $k^t = \sqrt{k^2 - k_z^2}$ of each region , where $k_z$ is the ( common ) wave number component along the cylinder axis . in particular , the transverse wave numbers reduce to the full wave numbers at normal incidence , as expected . in the general case , where polarization coupling does occur , eq . ( [eq:mie] ) should be modified to include the tm-te coupling coefficients , consistent with the derivation in the appendix and the results reported in [36] . as in the spherical scattering scenario and the cylindrical scenario at normal incidence , it is instructive to first consider the case of electrically small cylinders , i.e. , cylinders whose cross section is small compared to the wavelength .
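the boundary-value problem behind eqs. ([eq:mie])-([eq:vn]) can be made concrete with a small numerical sketch. the code below treats only the tm, normal-incidence, nonmagnetic case (no oblique incidence and no cross-polarization coupling): for each harmonic it matches the tangential fields at the core/cloak and cloak/background interfaces and extracts the scattered-field coefficient; the geometry and material values in the usage comments are placeholders, not the designs of this paper.

import numpy as np
from scipy.special import jv, yv, hankel1

def d(f, n, x):
    """derivative of a cylinder function with respect to its argument."""
    return 0.5 * (f(n - 1, x) - f(n + 1, x))

def tm_scattering_efficiency(k0, a, ac, eps, eps_c, n_max=8):
    """scattering efficiency of an infinite core+shell cylinder, tm polarization,
    normal incidence, nonmagnetic media (a sketch of the exact 2d solution;
    n_max = 8 is adequate for sub-wavelength cross sections)."""
    k1 = k0 * np.sqrt(eps + 0j)     # core wave number
    k2 = k0 * np.sqrt(eps_c + 0j)   # cloak-shell wave number
    total = 0.0
    for n in range(n_max + 1):
        # unknowns: [A (core), B, C (shell), S (scattered-field coefficient)]
        m = np.array([
            [jv(n, k1 * a), -jv(n, k2 * a), -yv(n, k2 * a), 0.0],
            [k1 * d(jv, n, k1 * a), -k2 * d(jv, n, k2 * a), -k2 * d(yv, n, k2 * a), 0.0],
            [0.0, jv(n, k2 * ac), yv(n, k2 * ac), -hankel1(n, k0 * ac)],
            [0.0, k2 * d(jv, n, k2 * ac), k2 * d(yv, n, k2 * ac), -k0 * d(hankel1, n, k0 * ac)],
        ], dtype=complex)
        rhs = np.array([0.0, 0.0, jv(n, k0 * ac), k0 * d(jv, n, k0 * ac)], dtype=complex)
        s_n = np.linalg.solve(m, rhs)[3]
        total += (1.0 if n == 0 else 2.0) * abs(s_n) ** 2
    return 2.0 * total / (k0 * ac)

# e.g. bare vs. cloaked thin dielectric cylinder (placeholder numbers):
# q_bare    = tm_scattering_efficiency(2*np.pi, a=0.05, ac=0.055, eps=10.0, eps_c=1.0)
# q_cloaked = tm_scattering_efficiency(2*np.pi, a=0.05, ac=0.055, eps=10.0, eps_c=-41.9)

in the quasi-static limit discussed next, only the lowest harmonics in this sum contribute appreciably.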
in this limit , as we discuss in the following , the analysis is made simpler by the fact that : ( a ) the dependence of eqs . ( [eq:un] , [eq:vn] ) on the incidence angle is negligible ; consequently , so is the cross-polarization effect on the cloaking conditions ; ( b ) the scattered wave is dominated by a limited number of multipolar orders . the combination of these features makes plasmonic cloaking easier to achieve , and more effective and robust , compared to larger cylinders . this is expected , since satisfying the condition $u_n = 0$ for only the few dominant terms in eq . ( [eq:sum] ) is easier to achieve , and such conditions are less dependent on the angle of incidence in this regime . in particular , it is easy to show that for regular dielectric cylinders ( $\epsilon \ne \epsilon_0$ , $\mu = \mu_0$ ) the dominant terms in eq . ( [eq:sum] ) are $c_0^{\rm tm}$ and $c_{\pm 1}^{\rm te}$ , respectively , for tm and te excitations . for arbitrary polarization , the $c_0^{\rm tm}$ scattering dominates . this also applies to a conducting cylinder , whose limiting case corresponds to a diverging permittivity in eqs . ( [eq:un] , [eq:vn] ) . for a magnetic cylinder ( $\mu \ne \mu_0$ , $\epsilon = \epsilon_0$ ) dual considerations apply , and the dual coefficients dominate . as discussed in the appendix , for electrically small cylinders the cross-coupling terms vanish for any angular order . we may thus derive the quasi-static cloaking condition by simply taking the first-order taylor expansion of the coefficients of eq . ( [eq:mie] ) . this leads to simple closed-form solutions for the appropriate ratio of cloak to core radii to achieve cloaking in the quasi-static limit for each scattering coefficient , consistent with analogous formulas available in the spherical and cylindrical normal-incidence scenarios :

[eq:fot-gen]
$c_{n\ne 0}^{\rm te} : \quad \dfrac{a_c}{a} = \sqrt[2n]{\dfrac{(\epsilon_c-\epsilon)(\epsilon_c+\epsilon_0)}{(\epsilon_c-\epsilon_0)(\epsilon_c+\epsilon)}}$
$c_{0}^{\rm tm} : \quad \dfrac{a_c}{a} = \sqrt{\dfrac{\epsilon_c-\epsilon}{\epsilon_c-\epsilon_0}}$
$c_{n\ne 0}^{\rm tm} : \quad \dfrac{a_c}{a} = \sqrt[2n]{\dfrac{(\mu_c-\mu)(\mu_c+\mu_0)}{(\mu_c-\mu_0)(\mu_c+\mu)}}$

for perfectly electric conducting ( pec ) cylinders , a case of particular interest for cloaking applications at radio frequencies , we find :

[eq:fot-pec]
$c_{n\ne 0}^{\rm te} : \quad \dfrac{a_c}{a} = \sqrt[2n]{\dfrac{\epsilon_c+\epsilon_0}{\epsilon_0-\epsilon_c}}$
$c_{n\ne 0}^{\rm tm} : \quad \dfrac{a_c}{a} = \sqrt[2n]{\dfrac{\mu_c+\mu_0}{\mu_c-\mu_0}}$

we note that ref . misreported the absence of a quasi-static condition for one of these coefficients ; the correct formula , shown above , results from taking the proper limits for the permittivity and permeability in our previous analysis . this is particularly relevant for oblique incidence , since enforcing only a diverging permittivity to model a pec boundary , as suggested in ref . , would not ensure a zero normal component of the magnetic field on the boundary , as required in the pec limit . as correctly reported in , there is no quasi-static condition for the $c_0^{\rm tm}$ coefficient in the pec scenario . a few important points should be highlighted regarding eqs . ( [eq:fot-gen]-[eq:fot-pec] ) . first of all , eq . ( [eq:fot-gen] ) for te polarization formally coincides with the derivation of ref . . the formulas here have been obtained for completely generic oblique incidence , generalizing the previously published result . they show that in the long-wavelength limit , cloaking conditions are _ unaffected by an arbitrary variation of the angle of incidence _ . in this regime , the tm-te polarization cross terms are of second order ( cf .
appendix [ app ] ) , thus eq .( [ eq : fot - gen ] ) ensures that an electrically small magnetodielectric cylinder may always be cloaked using its dominant scattering order in the taylor expansion , regardless of the angle of incidence . as in the spherical scenario , these quasi - static formulas split the role of permittivity and permeability between tm and te polarizations .special attention should be paid to the azimuthally symmetric modes , for which the role of permittivity or permeability is reversed compared to higher - order modes . moreover , again as in the spherical case , the cloaking conditions depend only on the ratio , implying that a thin homogeneous shell may be employed to suppress the relevant scattering orders in this quasi - static scenario .it should be stressed that eqs .( [ eq : fot - gen]-[eq : fot - pec ] ) may not be met by arbitrary values of the constituent parameters .rather , they should lie in specific ranges of permittivity , due to the simple physical constraint on geometry . from a practical standpoint , an electrically thin dielectric cylinder , for which the coefficient is dominant ,may always be cloaked by a thin shell with . under the first of the conditions in eq .[ eq : fot - gen ] , most of the scattering may be suppressed , although significantly negative may be required to achieve a very thin shell . by duality, an electrically thin magnetic cylinder will require . for such infinite cylinders ,the cloaking conditions may in general become trickier than in the spherical case , since the azimuthally symmetric cylindrical waves follow special cloaking conditions : they are permittivity - based for zeroth - order when higher order are permeability - based , and vice versa .this is particularly relevant for the conducting scenario , for which the dominant coefficient may not be canceled at all in the long - wavelength limit ( cf .( [ eq : fot - pec ] ) ) .it should be emphasized that it is still possible to suppress scattering in the fully dynamic scenario , since it may always be canceled with its dynamic expression . however , in the long - wavelength limit , the required wave number in the cloak considerably grows , implying that quasi - static considerations such as those used in eq .( [ eq : fot - pec ] ) may not be applied .however , it is evident that conducting objects are significantly more challenging to cloak for cylinders than spheres .the physical reason for this difference is evidently related to the fact that electrically thin cylinders are _ not _ electrically small objects .these systems are still infinite in the axial direction ( for this analytical treatment ) , which implies that their scattering may not necessarily be small above all when conduction currents may be induced in the direction , as with the tm cylindrical wave . 
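the quasi-static conditions in eq. ([eq:fot-gen]) can also be inverted directly: given a desired cloak-to-core radius ratio they return the cloak permittivity that cancels a chosen coefficient, or vice versa. the snippet below is a minimal sketch of such an inversion for the c_0^tm and c_n^te conditions of a nonmagnetic dielectric cylinder; the numbers in the example are illustrative only.

import numpy as np

def eps_c_for_c0_tm(eps, ratio, eps0=1.0):
    """cloak permittivity cancelling the quasi-static c_0^tm term, given the
    cloak-to-core radius ratio a_c/a, from a_c/a = sqrt((eps_c-eps)/(eps_c-eps0))."""
    r2 = ratio ** 2
    return (r2 * eps0 - eps) / (r2 - 1.0)

def ratio_for_cn_te(eps, eps_c, eps0=1.0, n=1):
    """cloak-to-core radius ratio cancelling the quasi-static c_n^te term
    (2n-th root of the permittivity combination in eq. [eq:fot-gen])."""
    arg = (eps_c - eps) * (eps_c + eps0) / ((eps_c - eps0) * (eps_c + eps))
    return arg ** (1.0 / (2 * n))

# e.g. a shell 10% thicker than the core around an eps = 10 cylinder requires a
# strongly negative cloak permittivity for the c_0^tm condition:
print(eps_c_for_c0_tm(eps=10.0, ratio=1.1))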
to assess the validity of the previous approximate cloaking conditions in the fully dynamic scattering problem , we present some numerical examples of interest in this section , solved using the exact analytical theory reported above .consider first the case of a dielectric cylinder with and normalized diameter , covered by a thin uniform cloaking shell with .[ fig : th - oblique]a shows the variation of the total scattering efficiency as a function of the cloak permittivity and of the angle of incidence of a tm illuminating plane wave .it is immediately apparent that the use of negative - permittivity thin cloaks may significantly reduce the overall scattering efficiency of the cylinder , and in particular minimum scattering is obtained very close to the corresponding quasi - static cloaking condition for in eq .( [ eq : fot - gen ] ) , which would yield .moreover , minimum scattering is achieved at the same value of negative permittivity , independent of the angle of incidence , consistent with the previous theoretical results , despite some polarization coupling for oblique angles .overall scattering reduction , compared to no cloak ( ) , is especially significant in the normal incidence case , which provides over 50 db reduction . for oblique incidence ,the coupling with te coefficients affects the cloaking performance and generates residual scattering at the cloaking condition , but it is seen that the same ( negative ) value of cloak permittivity may provide significant scattering reduction ( over 10 db ) for a wide angular range without drastic changes . for smaller incidence angles, we also observe excitation of a plasmonic resonance for slightly negative values of .this is associated with the condition , obtained here in the quasi - static limit for , consistent with findings on quasi - static plasmonic resonances in layered spheres and cylinders .it is evident that the resonance is not excited at normal incidence , but arises as soon as oblique incidence is considered , due to cross - polarization coupling .this explains the sharp peak for small negative values of in fig .[ fig : th - oblique]a. analogously , for low positive values of , a second small dip in the scattering efficiency appears at , associated with the cloaking condition for = 0 in eq .( [ eq : fot - gen ] ) . here , this provides the solution . once again, this dip does not appear at normal incidence , due to the lack of coupling between polarizations for . to conclude this discussion, it should be underlined that , in absolute value , scattering is larger for normal incidence in the uncloaked scenario ( in the figure ) , as expected , due to the larger component . on the other hand , the proper cloak design may maximally suppress scattering in this worst - case condition , which is an inherent property of this cloaking approach . and : + ( a ) dielectric ( , ) and ( b ) magnetic ( , ) ; for three different angles of incidence of tm-polarized radiation . 
in the electrically thin limit ,the cloaking conditions have weak dependence on the angle of incidence.,width=623 ] as a second example , consider the same cylindrical geometry , but with magnetic properties ( ) , again excited by a tm wave .all other parameters are kept the same .the cylinder s scattering efficiency is shown in fig .[ fig : th - oblique]b .as anticipated above , this configuration gives much weaker scattering in the uncloaked scenario ( ) , due to its small size and lack of dielectric contrast with the background .in fact , the coefficient is negligible in this case .residual scattering is dominated by the higher - order coefficient , which may be canceled with or ( note the dual behavior compared to in fig . [fig : th - oblique]a ) .two clear scattering dips are visible around these two values , which also in this scenario do not depend on the angle of incidence , consistent with eq .[ eq : fot - gen ] .the absence of scattering from higher - order modes and of coupling for the modes ensures that overall cloak performance is unchanged by variations of the incidence angle , as evidenced by the different curves in fig .[ fig : th - oblique]b .it should be stressed that the above numerical analyses neglect ohmic absorption losses in the cloak and/or core materials .we include these effects in the following sections .regardless , the plasmonic cloaking technique has been shown to be inherently robust against moderate absorption [ 20 ] , since it is not based on a resonant effect .this implies that the curves in fig .[ fig : th - oblique ] would remain practically unchanged near the cloaking regions when moderate losses are considered . on the other hand , the large scattering peaks associated with plasmonic resonances would be significantly dampened by realistic losses .this and the previous considerations imply that : * the plasmonic cloaking technique may be successfully applied to infinite cylinders . * the quasi - static conditions , eqs .( [ eq : fot - gen]-[eq : fot - pec ] ) , hold to a very good approximation for thin cylinders , , and ensure dramatic scattering reduction in this limit .* cloaking conditions for electrically thin cylinders are very weakly dependent on . thus , cloak designs for electrically thin cylinders are robust against the angle of incidence for any scattering order and polarization .we explore numerically in the following sections how these properties are affected by considering thicker geometries and truncation effects .for a given infinite cylinder of radius and permittivity , a single - layer plasmonic cloaking shell may be optimized by varying its two design parameters ( or ) and , using the analytical results derived in section [ sec : anal ] . in general , it is preferable to choose thin shell thicknesses , since drastically increased cross - sections imply reduced bandwidths and larger sensitivities to the design parameters .this simple cloaking design inherently limits the overall total thickness of cylinders that we may cloak , since only few scattering orders may be drastically suppressed independently .magnetic properties of the cloak as well as multi - layer designs may be considered to increase the available degrees of freedom and size of objects to be cloaked . in this work , however , for sake of simplicity we limit our interest to non - magnetic cloaks , and we consider moderate cross - sections of the objects to be cloaked . 
in particular , in the following we consider cylinders with moderate diameters and relative permittivities of 3 and 10 in the dielectric scenario , in addition to a pec cylinder . since the previous theoretical discussion shows that infinite cylinders may be effectively cloaked , as anticipated , we consider overall lengths l of several times the diameter , which may be comparable to or larger than the wavelength of operation . because of the tm-te polarization coupling described in the previous section ( cf . also appendix [app] ) , a completely general cloak optimization would be quite complicated , and would in general depend on the incidence angle and polarization of the excitation . we choose instead to optimize the cloak response at normal incidence , which is usually the angle for which the largest scattering is produced , and then examine the response at oblique angles with numerical simulations . a more extensive cloak optimization would involve the choice of a cost function and the application of global ( e.g. , genetic algorithms ) and local ( e.g. , quasi-newton ) optimization techniques across all angles and polarizations , also as a function of the observer's position . in general , further optimization may be achieved by considering magnetic cloaks , as discussed above . we will analyze these aspects in the near future . to provide insight into the effects of the various parameters involved in the cloak design , we examine the variation of the scattering efficiency with the cloak permittivity and thickness for te or tm plane-wave normal incidence in the infinite ( 2d ) scenario , using the formulation developed in the previous section . to that end we write the scattering gain as the ratio of the scattering efficiency of the cloaked cylinder to that of the bare one , [eq:costfn] $q_s^{\rm gain} = q_s / q_s^{\rm uncl}$ , where $q_s$ is the scattering efficiency from eq . ( [eq:sum] ) and the superscript `` uncl '' represents the uncloaked case , calculated with eqs . ( [eq:mie]-[eq:vn] ) in the limit in which the cloak shell vanishes . in general , the above discussions show that it is not possible to achieve a large scattering reduction for both tm and te illumination simultaneously with a single-layer permittivity cloak at the same frequency . for moderate cross sections , however , such as those considered here , scattering is generally dominated by one of the two responses . it is thus best to design for maximal suppression of that polarization . hence , we focus on tm waves , which dominate scattering from infinite dielectric and conducting cylinders in the long-wavelength limit ( cf . the quasi-static analysis in the previous section ) . ideally , the cloak is designed to provide a minimized scattering gain at normal incidence .
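the contour maps described next can be reproduced schematically by sweeping the cloak permittivity and the cloak-to-core radius ratio and evaluating the gain of eq. ([eq:costfn]) at each grid point. the sketch below assumes the tm_scattering_efficiency routine from the earlier normal-incidence sketch is available; the grid limits and geometry are placeholders, not the values used for the figures.

import numpy as np

# assumes tm_scattering_efficiency from the earlier sketch is in scope
def scattering_gain_map(k0, a, eps, eps_c_values, ratio_values):
    """scattering gain (cloaked / uncloaked efficiency) over a grid of cloak
    permittivities and cloak-to-core radius ratios, tm normal incidence."""
    q_uncl = tm_scattering_efficiency(k0, a, a * 1.0001, eps, 1.0)  # vanishing vacuum shell
    gain = np.empty((len(ratio_values), len(eps_c_values)))
    for i, r in enumerate(ratio_values):
        for j, ec in enumerate(eps_c_values):
            gain[i, j] = tm_scattering_efficiency(k0, a, r * a, eps, ec) / q_uncl
    return gain

# e.g. locate the cloaking valley for an eps = 10 cylinder (placeholder grid):
# g = scattering_gain_map(2*np.pi, 0.05, 10.0,
#                         np.linspace(-60, 20, 81), np.linspace(1.05, 1.5, 10))
# print(np.unravel_index(np.argmin(g), g.shape))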
for our calculations we used mathematica and truncated the mie summations at order , which ensured convergence for the cross - sections considered here .[ fig : contourscan ] shows scattering gain contour plots for infinite cylinders of diameter , relative permittivities ( fig .[ fig : contourscan]a ) and ( fig .[ fig : contourscan]b ) , and tm normal incidence .the plots show alternating loci of resonant peaks ( large , light color ) and cloaking regions ( near - zero , dark color ) .as expected , thinner cloaks ( smaller ) yield larger bandwidths , reflected by wider cloaking regions .this implies that the ideal cloak design would utilize , as expected from the results in sec .[ sec : anal ] , from simple physical considerations and analogous results in the spherical scenario .the cloaking loci for large positive values of are associated with anti - resonances that arise when the shell thickness is comparable to the wavelength in the shell material .these anti - resonances are narrow - bandwidth and highly sensitive to the design parameters , and they necessarily lie in close proximity with scattering resonant peaks , which makes them not suitable for a robust cloaking design .[ fig : eps - opt ] shows constant slices of fig .[ fig : contourscan ] to better illustrate these characteristics .these slices illustrate how choices typically produce lower scattering gain , i.e. better cloaking , away from dangerous resonant enhancements .this is especially true when cloaking low - density dielectric cylinders .( -axis ) and ( -axis ) , for core permittivities as labeled .note the resonant enhancements and the cloaking regions.,title="fig:",width=272 ] ( -axis ) and ( -axis ) , for core permittivities as labeled .note the resonant enhancements and the cloaking regions.,title="fig:",width=272 ] + ( a ) ( b ) as a function of cloak permittivity , for slices of constant cloak thickness , , in the contour plots of fig .[ fig : contourscan ] , for core permittivities ( left ) and ( right).,title="fig:",width=272 ] as a function of cloak permittivity , for slices of constant cloak thickness , , in the contour plots of fig .[ fig : contourscan ] , for core permittivities ( left ) and ( right).,title="fig:",width=272 ] we also considered optimization of plasmonic cloaks for pec cylinders of the same size .however , as noted in the quasi - static limit , here the dominant scattering contribution ( via the coefficient ) , may not be canceled in the long - wavelength regime .this implies that in the corresponding contour plots there would not be cloaking regions for thin negative - permittivity shells , as in the dielectric scenario .moderate scattering reduction may be achieved with large- thick shells , but as outlined above this implies large sensitivity to frequency , design variations and closely - spaced resonant peaks , which may be excited at different incidence angles .it is thus evident that a simple permittivity cloak may not be sufficient to adequately cloak a pec cylinder and , as in the spherical scenario , it may require magnetic permeability different from the background for robust scattering reduction . 
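A scan of the scattering gain over the cloak permittivity and thickness ratio, mirroring the contour plots described above, can be sketched with the scattering_gain function from the previous snippet. The grid ranges, frequency and core parameters below are purely illustrative and are not the paper's values.

```python
import numpy as np

# Illustrative scan only: the paper's diameters, frequency and axis ranges are not reproduced.
f0, a = 3e9, 0.02                               # design frequency [Hz], core radius [m]
ratios = np.linspace(0.80, 0.99, 30)            # a / a_c (thinner shell -> ratio closer to 1)
eps_c = np.linspace(-10.0, 10.0, 200)           # cloak permittivity; grid avoids eps_c = 0 exactly

gain = np.array([[scattering_gain(f0, a, r, 3.0, e) for e in eps_c] for r in ratios])
i, j = np.unravel_index(np.argmin(gain), gain.shape)
print(f"lowest gain {gain[i, j]:.2e} at a/ac = {ratios[i]:.3f}, eps_c = {eps_c[j]:.2f}")
```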
in the following , therefore , we mainly focus our design efforts on dielectric cylinders .the previous results show that optimal cloak configurations for dielectric cylinders are based on negative permittivity metamaterials and thin cloak shells .we should point out that negative values of effective permittivity may indeed be achieved in the microwave or thz frequency ranges using various metamaterial geometries , such as wire media or parallel - plate implants , and they are naturally available at larger frequencies. in particular , the parallel - plate implant technology may be particularly well - suited for cloaking incident tm waves , as it has been already demonstrated theoretically and verified experimentally for normal incidence at microwave frequencies .we assume here , for sake of simplicity , that the required value of effective permittivity is available for the shell geometry of interest and that the cloak material is isotropic .possible anisotropy for a specific metamaterial realization , inherent in some proposed realizations , may affect cloak performance for different incidence angles , but our preliminary results show that for thin cloaks such effects may be minor .thus , for simplicity we always assume idealized isotropic metamaterials in the following .the requirement that any passive metamaterial has a frequency dispersion ensuring is met here by assuming a drude dispersion model of the form : [ eq : drude ] _ c ( ) = 1 - . here and in the numerical simulations in the next section , we calculated the plasma frequency to ensure that at the design frequency the real part of yields the required value .the damping frequency , associated with the level of losses in the metamaterial , has been always assumed in the following to be .this value provides a moderate amount of loss , comparable with practically - realizable metamaterial geometries at these frequencies .table [ tab : opt ] summarizes an extensive campaign of optimizations that we performed to cloak different cylindrical geometries , as outlined above , considering several cloaking designs .inspecting table i , we see that thin cylinders ( ) , consistent with the previous quasi - static analysis , require optimized cloaks with negative permittivity in the case of dielectric objects , and a large positive permittivity for conducting materials .the corresponding scattering gain may become extremely low , since only few mie coefficients effectively contribute to the total scattering , and they are properly canceled by the optimized cloak design . 
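The Drude dispersion relation quoted above did not survive extraction. A common form, assumed here, is eps_c(f) = 1 - f_p^2 / (f (f - i*gamma)); the sign of the damping term depends on the time-harmonic convention and does not affect the real part. Given the target real permittivity at the design frequency and an assumed damping rate, the plasma frequency follows in closed form, as sketched below; the numerical values are placeholders rather than the paper's.

```python
import numpy as np

def drude_eps(f, f_p, gamma):
    """Assumed Drude form eps(f) = 1 - f_p^2 / (f * (f - i*gamma))."""
    return 1.0 - f_p**2 / (f * (f - 1j * gamma))

def plasma_frequency(eps_target, f0, gamma):
    """f_p chosen so that Re[eps(f0)] equals the required cloak permittivity eps_target."""
    if eps_target >= 1.0:
        raise ValueError("a passive Drude medium only provides eps < 1")
    # Re[eps] = 1 - f_p^2 / (f0^2 + gamma^2)  =>  solve for f_p.
    return np.sqrt((1.0 - eps_target) * (f0**2 + gamma**2))

f0 = 3e9                      # design frequency [Hz] (placeholder)
gamma = 0.01 * f0             # damping frequency, here 1% of f0 (assumed loss level)
f_p = plasma_frequency(-2.0, f0, gamma)
print(f_p / 1e9, drude_eps(f0, f_p, gamma))
```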
as a general rule of thumb , consistent with the previous considerations , thinner cloaks require larger values of permittivity , either positive or negative .relatively thicker cloaks relax the requirements on very negative permittivity of the cloak , but , as highlighted above , larger thickness also tends to produce additional scattering terms , which limits overall performance .however , lower absolute values of negative ( in the case of dielectric ) or positive ( for pec ) permittivity may be easier to achieve and be less sensitive to loss .a trade - off between cloak thickness , material reliability and overall cloaking performance should be considered .larger dielectric objects may be cloaked by thicker cloaks with positive values of permittivity , since the dynamic nature of the wave in the cloak may produce a negative polarizability even with positive materials in larger shells .however , the overall scattering reduction is less dramatic ; residual scattering is indeed expected for larger objects . in particular , as discussed above , conducting cylinders are most challenging to be cloaked against tm waves , due to the axial conduction currents induced on their surface .scattering reduction of around 3 db , however , may still be achieved even for conducting cylinders . moreover , large values of core permittivity are harder to cloak , and the residual scattering is larger than for a lower - permittivity core of the same size .our cloak designs in table [ tab : opt ] were optimized for tm normal incidence , the scenario with maximum scattering for the considered objects . at oblique incidence ,however , tm te coupling excites additional scattering modes , which may affect the overall response , in particular for thicker objects and moving toward grazing incidence .we discuss these features in detail in the next section ..parameters from the cloak optimization procedure described in sec .[ sub : opt ] for different objects and cloak thicknesses . corresponds to the scattering gain for normal - incidence tm illumination .[ cols="^,^,^,^,^",options="header " , ] fig .[ fig : angles ] shows the variation of scattering gain for the cylinder of fig .[ fig : uvc ] varying the incidence angle , as depicted in fig .[ fig : cyl ] . even for small ( near grazing )incidence angles , the cloaking effects are barely different from the normal - incidence case . for smaller values of , cloaking is somewhat reduced at , due to the excitation of te scattered waves via cross - polarization coupling , as discussed in sec . [sec : anal ] .these results demonstrate strong agreement between the analytical calculations for infinite cylinders and the numerical simulations for finite , although some expected minor deviation does appear at small angles , due to end effects .we emphasized that , although the normalized scattering gain is reduced less for smaller angles , the overall rcs is significantly smaller in these cases , since the electric field component along the cylinder is comparatively shorter , consistent with the discussion in the previous section .the results of fig .[ fig : angles ] demonstrate convincingly that plasmonic cloaking may be applied to finite - length dielectric cylinders at oblique excitation , although at small one may be required to cloak the te contribution separately for improved performance .this is consistent with our findings in fig .[ fig : tmvte ] . , for various angles of incidence .( a ) results from numerical simulation .( b ) numerical simulation ( solid ) v. 
analytical infinite - cylinder results ( dashed ) for select angles.,width=642 ] the left panel of fig .[ fig : diams+eps ] shows the scattering gain under normal - incidence tm illumination for the cylinder of sec . [ sub : primary ] ( cf .[ fig : uvc ] ) next to that for a cylinder with twice that diameter , ( cf . table [ tab : opt ] for cloak parameters ) .the figure also compares these results with the infinite - cylinder analytical result of sec .[ sec : anal ] .end effects are negligible except at low , and the analytical results match very well the numerical simulations especially at frequencies where plasmonic resonances are not excited . cloaking is effective over a relatively broad bandwidth , even for the thicker cylinder .this implies that the designed cloaks may be effective over a relatively broad range of object size . the excellent agreement between the analytical curves for infinite cylinders and the simulation results for truncated geometriesindicate that truncation effects are negligible for cloaking performance and suggest the use of the previously derived analytical formulas for fast cloak optimization .a simple permittivity cloak , as considered here , is effective for significant scattering reduction for cylinders with diameters of the order of the wavelength .thicker cylinders may require the use of multilayer and/or magnetic cloaks , as discussed in ref . for spherical geometries . , but with core permittivities as labeled .the calculations used the optimized cloak designs as described in table [ tab : opt].,title="fig:",width=264 ] , but with core permittivities as labeled .the calculations used the optimized cloak designs as described in table [ tab : opt].,title="fig:",width=264 ] the right panel of fig .[ fig : diams+eps ] shows the scattering gain for the cylinder of sec .[ sub : primary ] ( cf .[ fig : uvc ] ) , for the same dielectric and also with a denser core , ( see table [ tab : opt ] ) . in the large- limit , this comparison underscores some of the challenges that may be involved in cloaking a conducting object , which would coincide with a dielectric core in the limit of very large permittivity .there is significant scattering reduction around the design frequency also in the denser scenario , but this example simultaneously supports even stronger cloaking at a lower frequencies , ghz .this effect , predicted by the analytical results for infinite cylinders , is associated with the frequency dispersion of the cloak , which matches the cloaking condition for the coefficient at lower frequencies ( for more negative values ) .effectively , this scenario presents a coincidental cloaking effect at another frequency .this does _ not _ imply that one could tune the second suppression arbitrarily , since its position depends on the natural metamaterial dispersion .however , one could tune its separation from to a limited degree if the cloak thickness were not fixed at . 
(Figure caption: scattering gain for variations in length by a factor of two, for three different angles of incidence under TM excitation; the cloaking design is robust even for very short lengths.)

Fig. [fig:incidence] analyzes in more detail the effects of truncation for the case of Fig. [fig:uvc] in Sec. [sub:primary], by considering different truncation lengths at several incidence angles, one per panel. Each panel compares different lengths, including the analytical result for infinite cylinders. The cloaking effect is indeed robust against variations in incidence angle, and truncation effects are quite moderate, as the scattering gain follows the line calculated analytically in the infinite-cylinder geometry. Even for the shortest length (a 2:1 aspect ratio), the full-wave simulation still follows the infinite-length case with surprising accuracy, above all near the design frequency. In this case, scattering is characterized by small resonant peaks at lower frequencies, associated with longitudinal resonances due to the finite length. The results are nevertheless extremely encouraging, in particular for shorter cylinders, for which the cloaking design works extremely well in the frequency range of interest.

Figs. [fig:h-field]-[fig:farfield] illustrate the full-wave near- and far-field numerical results for the cylinder of Sec. [sub:primary] (Fig. [fig:uvc]), illuminated by a TM wave impinging at an oblique angle. The figures compare in panel (b) the cloaked configuration at the design frequency with the uncloaked case in panel (a). Consistent with Fig. [fig:th-oblique]a, the overall calculated RCS reduction at the central frequency is -14.2 dB, and the following figures depict the effective functionality of the cloak in its near- and far-field regions. We have chosen to show the performance of the cloak under oblique excitation; similar considerations apply for any other angle of incidence, which we have verified via detailed study. Fig. [fig:h-field] shows the (normal) magnetic field distribution on the E plane (snapshot in time), comparing the uncloaked scenario (a) with the cloaked one (b). Several interesting features are noteworthy.
In the uncloaked case, the wave penetrates the dielectric rod and experiences a wavelength shortening that effectively distorts the planar wave fronts on the back of the cylinder, producing a significant shadow and scattered fields all around the object. The thin plasmonic metamaterial shell is able to re-establish the proper planar fronts just outside the cloak (b), ensuring reduced scattering and suppressed visibility for an outside observer positioned anywhere in the near- or far-field of the object. It should be noted that this effect is obtained for oblique incidence, and the cylinder ends are uncloaked. Additional improvement may be achieved by proper cloaking of these terminations. We may also easily observe how the plasmonic layer supports surface-plasmon waves traveling along the shell, as expected due to its negative permittivity. It is interesting to observe how these waves effectively cancel the residual scattering and restore the phase fronts to almost exactly match those of the original plane wave, had it traveled through free space instead.

Fig. [fig:farfield] illustrates the far-field radiation patterns for this same cylinder, comparing on the same scale the scattering from the uncloaked (a) and cloaked (b) objects, showing drastic suppression of the bistatic RCS at all angles. Uncloaked scattering exhibits itself mainly as a shadow on the cylinder backside; various higher-order scattering harmonics contribute to this residual scattering pattern. Most of the scattering is suppressed by the cloak. Panel (c) shows the cloaked residual scattering on a much smaller scale: as expected, scattering is not identically zero, and small lobes, associated with higher-order (and more directive) cylindrical harmonics not completely suppressed, are still present. Their relevance, however, is very limited compared to the original scattering levels.

(Figure [fig:h-field] caption: field distribution on the plane of polarization for a TM wave at oblique incidence (origin at top right corner) for the object of Fig. [fig:uvc]; (a) uncloaked cylinder, (b) cloaked cylinder. Severe uncloaked wavefront distortions are almost totally restored by the thin cloak; cloak interface nodes are associated with plasmonic surface waves.)
(Figure [fig:farfield] caption: far-field scattering patterns for the uncloaked and cloaked cylinders of Fig. [fig:uvc], plotted on the same scale; (a) uncloaked cylinder, (b) cloaked cylinder, (c) enlargement of (b) showing the residual scattering pattern, dominated by higher-order scattering modes. Panels (a) and (b) demonstrate the dramatic scattering reduction from a properly tuned plasmonic cloak.)

We have presented an extensive investigation of the application of the plasmonic cloaking technique to circular cylinders illuminated by plane waves of arbitrary polarization and angle of incidence. We have derived analytical formulas for the general oblique-angle scenario, and shown that in the electrically thin limit there is no angular dependence of the cloaking response, i.e., the design formulas are independent of the angle of incidence. To study the characteristics of cylindrical cloaks, we have designed a set of TM-optimized cloaks for a wide range of parameters and cylinders of interest, using the normal-incidence analytical formulas with realistic losses and metamaterial frequency dispersion implemented in a Drude model. For the cloak design, we have focused on TM polarization because it dominates the scattering of moderately thick dielectric and conducting cylinders, which is of interest for several applications within the radar community. The drastic scattering reduction of the optimized cloaks was corroborated with full-wave numerical simulations, taking into account variations of the angle of incidence, core permittivity, cylinder diameter, and, most importantly, truncation effects due to finite length. We have found that, as predicted by the analytical formulas presented here, for elongated objects with diameters up to one-half the wavelength (for which our cloaking technique is most effective), but with length comparable with the wavelength of operation, a simple one-layer permittivity cloak is very effective, providing significant scattering reduction that is highly robust to variations in the angle of incidence. Performance is slightly weakened at near-grazing angles, for which TM-TE polarization coupling partially affects the overall performance of a single-layer cloak optimized for TM polarization alone. We note, however, that scattering approaching grazing angles is the weakest in absolute value, and thus less important for achieving overall scattering reduction. The analytical theory developed here for infinite cylinders has been strongly corroborated by our numerical simulations for finite lengths, implying that the truncation effects do not significantly perturb the
cloaking effect .we are currently working on the extension of these concepts to multi - layered metamaterial cloaks for suppression of multiple scattering coefficients , and contemporary suppression of te and tm scattered waves , which may increase the size of the cloaked objects and the angular range of operation .we are also currently pursuing an experimental realization of the plasmonic cloaking concept at radio frequencies for practical finite cylinders and cloaks .consider the geometry of fig . [fig : cyl ] , consisting of a circular cylinder of radius , permittivity and permeability , covered by a thin conformal cylindrical cloak shell of thickness , permittivity and permeability .when excited by an impinging infinite plane wave of arbitrary incidence angle and polarization , the general scattering problem may be solved in the limit of long cylinders ( ) by expanding the impinging and scattered fields in terms of cylindrical harmonics .the problem may be split into the orthogonal polarizations with -field transverse to the cylinder axis ( te ) and -field transverse ( tm ) . for a tm plane wave impinging at an angle from the cylinder axis , the incoming magnetic field may be written as : [ eq : h - field ] h_i = e^-iz e^-ik_0^tx , where is the electric field amplitude , and are the free - space characteristic impedance and wave number , is the wave number component along the cylinder axis , is the transverse component of the wave number , and the corresponding impinging electric field may be calculated using the curl maxwell s equations . expanding in cylindrical waves ,we may write electric and magnetic fields as : e_tm = & & + + u_i^tm , + [ eq : eh ] h_tm = & & - , where [ eq : uitm ] u_i^tm = e_0 i^-n ( ) j_n(k_0^t ) e^in e^-iz and is the cylindrical bessel function of order . analogous expressions for te waves may be derived using duality . using the orthogonality of cylindrical waves ,the boundary conditions at the radial interfaces may be met by assuming the existence of transmitted cylindrical waves in the two dielectric regions and a scattered wave in free - space , which may be written consistently with eq .( [ eq : eh ] ) , using the scalar potentials : u_1^tm = i^-n e_0 e^in e^-iz & c_1,n^tm j_n(k_1^t ) & a_c , for the fields induced in the core region , shell , and for the the scattered field , respectively .the core and cloak regions are labeled `` 1 '' and `` 2 '' , respectively , and `` s '' represents the scattered wave outside the cloak ; are the relevant transverse wave numbers for each region and . and are the cylindrical bessel functions of the first and second kind , for incoming and outgoing waves , while is the cylindrical hankel function .the complex scattering coefficients are the unknowns .analogous equations may be written for te waves .for normal incidence ( ) , for pec objects , or for the azimuthally symmetric mode , the problem is easily solved by matching the two non - zero tangential field components at each radial interface : e_s , z ( ) = & e_2,z ( ) & = a_c , + e_2,z ( ) = & e_1,z ( ) & = a , + h_s,(z ) = & h_2,(z ) & = a_c , + [ eq : bcs ] h_2,(z ) = & h_1,(z ) & = a , for tm(te ) polarization .this yields the familiar rank - four determinant expressions reported in eqs .( [ eq : un],[eq : vn ] ) .consistent expressions for the te coefficients may be obtained by applying duality . 
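The scalar potential u_i^TM above expands the impinging plane wave in cylindrical harmonics with weights i^{-n} J_n(k_0^t rho) e^{i n phi} for the transverse part. A quick numerical check of that expansion, which is simply the Jacobi-Anger identity, is sketched below; the transverse wavenumber and the observation point are arbitrary test values, not parameters from the paper.

```python
import numpy as np
from scipy.special import jv

def transverse_plane_wave(kt, x, y, n_max=40):
    """Partial sum of  e^{-i kt x} = sum_n i^{-n} J_n(kt * rho) e^{i n phi}."""
    rho, phi = np.hypot(x, y), np.arctan2(y, x)
    n = np.arange(-n_max, n_max + 1)
    return np.sum(1j ** (-n) * jv(n, kt * rho) * np.exp(1j * n * phi))

kt = 2 * np.pi / 0.1            # transverse wavenumber for a 10 cm transverse wavelength
x, y = 0.03, -0.07              # arbitrary observation point [m]
print(abs(transverse_plane_wave(kt, x, y) - np.exp(-1j * kt * x)))   # ~1e-13 for this n_max
```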
In the general case of oblique incidence on a dielectric cylinder, however, the higher-order modes are characterized by all four independent tangential field components at each interface, which cannot be matched independently for each TE or TM harmonic. The boundary conditions may instead be met, as derived in previous work, by linearly combining TE and TM harmonics of the same order. The corresponding solution of an eight-by-eight system of equations provides the exact expression of the scattering coefficients in the general case, to be used in Eq. ([eq:sum]) to derive the total scattering width of the cylinder. This form of polarization coupling is inherently associated with the asymmetry introduced by the oblique excitation for higher-order modes, and it may be avoided only in the case of conducting objects or for normal incidence. In the quasi-static limit, however, the dependence of the arguments of the Bessel functions in Eqs. [eq:pot] on the transverse wave number is negligible as well; therefore this form of cross-polarization coupling is negligible. In this limit, Eqs. ([eq:un]-[eq:vn]) may be used for any angle of incidence, which is the basis for the derivation of Eqs. ([eq:fot-gen]-[eq:fot-pec]), which do not depend on the incidence angle.
Metamaterial cloaking has been proposed and studied in recent years following several interesting approaches. One of them, the scattering-cancellation technique, or plasmonic cloaking, exploits the plasmonic effects of suitably designed thin homogeneous metamaterial covers to drastically suppress the scattering of moderately sized objects within specific frequency ranges of interest. Besides its inherent simplicity, this technique also holds the promise of isotropic response and weak polarization dependence. Its theory has been applied extensively to symmetrical geometries and canonical 3D shapes, but its application to elongated objects has not been explored with the same level of detail. We derive here closed-form theoretical formulas for infinite cylinders under arbitrary wave incidence, and validate their performance with full-wave numerical simulations, also considering the effects of finite length and truncation in cylindrical objects. In particular, we find that a single isotropic (idealized) cloaking layer may successfully suppress the dominant scattering coefficients of moderately thin elongated objects, even for finite lengths comparable with the incident wavelength, with a weak dependence on the incidence angle. These results may pave the way for the application of plasmonic cloaking in a variety of practical scenarios of interest.
recently , much attention has been paid to brain rhythms observed in scalp electroencephalogram ( eeg ) and local field potentials ( lfp ) with electrodes inserted into the brain .these brain rhythms emerge via synchronization between individual firings in neural circuits .population synchronization between neural firing activities may be used for efficient sensory and cognitive processing such as sensory perception , multisensory integration , selective attention , and working memory .many recent works have been investigated in diverse views of population synchrony .this kind of population synchronization is also correlated with pathological rhythms associated with neural diseases . here , we are interested in these synchronous brain rhythms . population synchronization has been intensively investigated in neural circuits composed of spontaneously firing suprathreshold neurons exhibiting clock - like regular discharges . for this case ,population synchronization may occur via cooperation of regular firings of suprathreshold self - firing neurons .in contrast to the suprathreshold case , the case of subthreshold neurons has received little attention . for an isolated single case , a subthreshold neuron can not fire spontaneously ; it can fire only with the help of noise .here we are interested in population synchronization between complex noise - induced firings of subthreshold neurons which exhibit discharges like geiger counters .recently , noise - induced population synchronization was studied by varying the noise intensity observed in a population of subthreshold neurons , and thus collective coherence between noise - induced firings has been found to occur in an intermediate range of noise intensity . in this paper, we investigate coupling - induced population synchronization which leads to emergence of synchronous brain rhythms by varying the coupling strength in an excitatory population of globally coupled subthreshold izhikevich neurons , and thus rich types of population synchronization are found to emerge . as an element in our coupled neural system , we choose a simple izhikevich neuron which is as biologically plausible as the hodgkin - huxley model , yet as computationally efficient as the integrate - and - fire model .these izhikevich neurons interact via excitatory ampa synapses in our computational study . for small individual neurons fire spikings independently , and thus the population state is incoherent .however , when passing a lower threshold , population spike synchronization occurs because the coupling stimulates coherence between noise - induced spikings . as in globally - coupled chaotic systems , this kind of transition between population synchronization and incoherence may be well described in terms of an order parameter ; in our case , the time - averaged fluctuation of the population - averaged membrane potential plays the role of . as further increased and passes another threshold , noise - induced burstings appear in individual membrane potentials , and population burst synchronization also emerges .in contrast to spiking activity , bursting activity alternates between a silent phase and an active phase of repetitive spikings . 
this type of burstings are known to play the important roles in neural communication .as continues to increase , the length of active phase in individual bursting potential increases , and eventually a transition from bursting to fast spiking occurs at a threshold .consequently , breakup of population burst synchronization occurs and incoherent states appear because individual fast spikings keep no pace with each other .however , as is further increased , coupling stimulates population synchronization between fast spikings in a range of . for population states become incoherent and slow spikings appear in individual membrane potentials . as a final step ,when passes a high threshold , coupling induces oscillator death ( i.e. , quenching of noise - induced slow spikings of individual neurons ) because each neuron is attracted to a noisy equilibrium state .this stochastic oscillator death in the presence of noise is in contrast to the deterministic oscillator death occurring in the absence of noise . at the population level , a transition from firing to non - firing states results from stochastic oscillator death .we also characterize the firing - nonfiring transition in terms of the time - averaged population spike rate which plays a role similar to that of the order parameter for the incoherence - coherence transition .in addition to the statistical - mechanical analysis using and , these diverse population and individual states are well characterized by using the techniques of nonlinear dynamics such as the raster plot of spikes , the time series of the membrane potential , and the phase portrait .we consider an excitatory population of globally - coupled subthreshold neurons . as an element in our coupled neural system, we choose the simple izhikevich neuron model which is not only biologically plausible , but also computationally efficient .the population dynamics in this neural network is governed by the following set of ordinary differential equations : with the auxiliary after - spike resetting : where .\label{eq : cizf}\end{aligned}\ ] ] we note that of eq .( [ eq : cizd ] ) was obtained by fitting the spike initiation dynamics of cortical neurons so that the membrane potential has mv scale and the time has ms scale . the state of the neuron at a time is characterized by three dimensionless state variables : the membrane potential , the recovery variable representing the activation of the ionic current and the inactivation of the ionic current , and the synaptic gate variable denoting the fraction of open synaptic ion channels . 
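The governing equations and the after-spike resetting rule referenced above were garbled in extraction. For reference, the standard Izhikevich form, which the description in the text appears to follow once the DC drive, parametric noise, and all-to-all AMPA coupling are added, reads as below; the sigmoid threshold and width in s_infinity are not legible in the text, so this display should be read as a reconstruction rather than the authors' exact equations:

\dot v_i = 0.04\, v_i^2 + 5\, v_i + 140 - u_i + I_{DC} + D\,\xi_i(t) + I_{syn,i}, \qquad \dot u_i = a\,(b\, v_i - u_i),

with the auxiliary after-spike resetting: if v_i \ge 30\ \mathrm{mV}, then v_i \leftarrow c and u_i \leftarrow u_i + d, and

I_{syn,i} = \frac{J}{N-1}\sum_{j \ne i} s_j\,(V_{syn} - v_i), \qquad \dot s_j = \alpha\, s_\infty(v_j)\,(1 - s_j) - \beta\, s_j, \qquad s_\infty(v) = \frac{1}{1 + e^{-(v - v^{*})/\delta}}.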
after the spike reaches its apex ( = 30 mv ) , the membrane voltage and the recovery variable are reset according to eq .( [ eq : rs ] ) .there are four dimensionless parameters , and representing the time scale of the recovery variable , the sensitivity of to the subthreshold fluctuations of , and the after - spike reset value of and , respectively .tuning the four parameters , the izhikevich neuron model may produce 20 of the most prominent neuro - computational features of cortical neurons .unlike hodgkin - huxley - type conductance - based models , the izhikevich model matches neuronal dynamics instead of matching neuronal electrophysiology .each izhikevich neuron is stimulated by the common dc current and an independent gaussian white noise [ see the 2nd and 3rd terms in eq .( [ eq : ciza ] ) ] satisfying and , where denotes the ensemble average .the noise is a parametric one which randomly perturbs the strength of the applied current , and its intensity is controlled by the parameter .the last term in eq .( [ eq : ciza ] ) represents the coupling of the network .each neuron is connected to all the other ones through global couplings via excitatory ampa synapses . of eq .( [ eq : cize ] ) represents such synaptic current injected into the neuron . herethe coupling strength is controlled by the parameter and is the synaptic reversal potential .we use mv for the excitatory synapse .the synaptic gate variable obeys the 1st order kinetics of eq .( [ eq : cizc ] ) . here, the normalized concentration of synaptic transmitters , activating the synapse , is assumed to be an instantaneous sigmoidal function of the membrane potential with a threshold in eq .( [ eq : cizf ] ) , where we set mv and mv .the transmitter release occurs only when the neuron emits a spike ( i.e. , its potential is larger than ) . for the excitatory glutamate synapse ( involving the ampa receptors ) , the synaptic channel opening rate ,corresponding to the inverse of the synaptic rise time , is , and the synaptic closing rate , which is the inverse of the synaptic decay time , is .here we consider the case of regular - spiking cortical excitatory neurons for , , , and .depending on the system parameters , the izhikevich neurons may exhibit either type - i or type - ii excitability ; for the case of type - i ( type - ii ) neurons , the firing frequency begins to increase from zero ( non - zero finite value ) when passes a threshold . for our case , a deterministic izhikevich neuron ( for ) exhibits a jump from a resting state ( denoted by solid line ) to a spiking state ( denoted by solid circles ) via a subcritical hopf bifurcation for by absorbing an unstable limit cycle born via a fold limit cycle bifurcation for , as shown in fig .[ fig : single](a ) .hence , the izhikevich neuron shows the type - ii excitability because it begins to fire with a non - zero frequency that is relatively insensitive to the change in . throughout this paper ,we consider a subthreshold case of .an isolated subthreshold izhikevich neuron can not fire spontaneously without noise .figures [ fig : single](b ) and [ fig : single](c ) show a time series of the membrane potential of a subthreshold neuron and the interspike interval histogram for . complex noise - induced subthreshold oscillations and spikings with irregular interspike intervals appear . 
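A minimal simulation sketch of the globally coupled network just described is given below, using the reconstructed equations above and the stochastic Heun scheme mentioned later in the text. The parameter values (the regular-spiking constants a, b, c, d, the subthreshold DC drive, the noise intensity, and the AMPA rise and decay rates) are standard or assumed choices, because the paper's numbers did not survive extraction; the helper order_parameter implements the time-averaged fluctuation of the global potential used further below.

```python
import numpy as np

# Standard regular-spiking constants and assumed drive/noise/synapse values (placeholders).
a, b, c, d = 0.02, 0.2, -65.0, 8.0
I_dc, D = 3.6, 0.5              # subthreshold DC current and noise intensity (assumed)
V_syn, v_peak = 0.0, 30.0       # excitatory reversal potential and spike apex [mV]
alpha, beta = 10.0, 0.2         # synaptic opening/closing rates [1/ms] (AMPA-like, assumed)

def s_inf(v, v_star=0.0, delta=2.0):
    """Sigmoidal transmitter concentration; threshold and width are assumed values."""
    return 1.0 / (1.0 + np.exp(-(v - v_star) / delta))

def simulate(N=100, J=1.0, T=1000.0, dt=0.01, seed=0):
    """Heun integration of N all-to-all coupled Izhikevich neurons with parametric noise."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(-70.0, -50.0, N)
    u = b * v
    s = np.zeros(N)
    spikes, vg_trace = [], []

    def deriv(v, u, s):
        i_syn = (J / (N - 1)) * (s.sum() - s) * (V_syn - v)   # global coupling, no self term
        dv = 0.04 * v**2 + 5.0 * v + 140.0 - u + I_dc + i_syn
        du = a * (b * v - u)
        ds = alpha * s_inf(v) * (1.0 - s) - beta * s
        return dv, du, ds

    for step in range(int(T / dt)):
        noise = D * np.sqrt(dt) * rng.standard_normal(N)      # same increment in both stages
        dv1, du1, ds1 = deriv(v, u, s)
        vp, up, sp = v + dt * dv1 + noise, u + dt * du1, s + dt * ds1
        dv2, du2, ds2 = deriv(vp, up, sp)                     # Heun corrector stage
        v = v + 0.5 * dt * (dv1 + dv2) + noise
        u = u + 0.5 * dt * (du1 + du2)
        s = s + 0.5 * dt * (ds1 + ds2)

        fired = v >= v_peak                                   # after-spike resetting
        spikes.extend((step * dt, i) for i in np.where(fired)[0])
        u[fired] += d
        v[fired] = c
        vg_trace.append(v.mean())                             # global potential V_G

    return np.array(vg_trace), spikes

def order_parameter(vg_trace, discard=0.2):
    """Time-averaged fluctuation of V_G (mean square deviation after dropping a transient)."""
    vg = np.asarray(vg_trace)[int(discard * len(vg_trace)):]
    return np.mean((vg - vg.mean()) ** 2)

# Example: weak versus moderate coupling (small N and short T keep the sketch fast).
for J in (0.1, 1.0):
    vg, _ = simulate(N=50, J=J, T=500.0)
    print(J, order_parameter(vg))
```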
population synchronization is investigated in an excitatory population of these subthreshold izhikevich neurons coupled via ampa synapses .hereafter , we fix the value of the noise intensity as . numerical integration of the governing equations ( [ eq : ciza])-([eq : cizc ] ) is done using the heun method ( with the time step ms ) .for each realization of the stochastic process in eqs .( [ eq : ciza])-([eq : cizc ] ) , we choose a random initial point $ ] for the neuron with uniform probability in the range of , , and . .solid line denotes a stable equilibrium point .maximum and minimum values of for the spiking state are represented by solid circles .( b ) time series of the membrane potential and ( c ) the interspike interval histogram in the single subthreshold izhikevich neuron for and . ]by varying the coupling strength , we investigate population synchronization , via which synchronous brain rhythms emerge , by using diverse techniques of statistical mechanics and nonlinear dynamics. emergence of population synchronization may be described by the population - averaged membrane potential ( corresponding to the global potential ) and the global recovery variable , figure [ fig : order](a ) shows rich phase portraits of the representative coherent and incoherent population states in the phase plane .population synchronization appears on noisy limit cycles for and , while incoherent states occur on noisy equilibrium points for , and .particularly , for population burst synchronization emerges on a noisy hedgehoglike limit cycle ; spines and body correspond to active and silent phases of the bursting activity , respectively .a schematic phase diagram of these population states on the axis is shown in fig . [fig : order](b ) . transitions between incoherent and coherent states may be well described in terms of the order parameter . for our case , the mean square deviation of the global potential ( i.e. , time - averaged fluctuations of ), plays the role of an order parameter , where the overbar represents the time averaging . here, we discard the first time steps of a stochastic trajectory as transients during ms , and then we numerically compute by following the stochastic trajectory for ms when and . for the coherent ( incoherent ) state ,the order parameter approaches a nonzero ( zero ) limit value in the thermodynamic limit of .figure [ fig : order](c ) shows a plot of the order parameter versus the coupling strength . for , the order parameter tends to zero as , and hence incoherent states exist .as passes the lower threshold , a coherent transition to spike synchronization occurs because the coupling stimulates coherence between noise - induced spikings .thus , spike synchronization appears for .however , when passing another threshold , individual neurons exhibit noise - induced burstings and population burst synchronization occurs . as is further increased and passes a threshold ) , a transition from bursting to fast spiking occurs in individual potentials and the burst synchronization breaks up because individual fast spikes keep no pace with each other .thus , for the order parameter tends to zero as , and incoherent states appear .however , with further increase in , coupling - induced fast spike synchronization occurs in a range of . for incoherent statesreappear as shown in fig .[ fig : order](c ) , and individual neurons exhibit slow spikings . as a final step , when passing a high threshold , coupling induces stochastic oscillator death ( i.e. 
, cessation of noise - induced slow spikings ) because each neuron is attracted to a noisy equilibrium state .this stochastic oscillator death leads to a transition from firing to non - firing state at the population level . in this way, three kinds of population synchronization ( i.e. , spike , burst , and fast spike synchronization ) emerge in the gray regions of figs .[ fig : order](b ) and [ fig : order](c ) . globally - coupled excitatory subthreshold izhikevich neurons for and . ( a )phase portraits of the population states in the plane for .( b ) schematic diagram of populations states on the axis ( c ) transition between coherence and incoherence : plots of versus for , , and .spike , burst , and fast spike synchronizations occur in the gray regions in ( b ) and ( c ) . ]we present population synchronization clearly in terms of the raster plots of spikes and the time series of the global potential .the first spike synchronization appears in a range of .an example for is shown in fig .[ fig : population](a ) . stripes ( composed of spikes ) , indicating population synchronization , appear regularly with the mean time interval ms ) in the raster plot , and shows a small - amplitude negative - potential population rhythm with frequency hz .the second burst synchronization occurs in a range of .figure [ fig : population](b1 ) shows bursting synchronization for .clear burst bands , composed of stripes of spikes , appear successively at nearly regular time intervals ms ) in the raster plot , and the corresponding global potential exhibits a large - amplitude bursting rhythm with hz .in contrast to spiking rhythm in fig . [fig : population](a ) , much more hyperpolarization occurs in the bursting rhythm . for a clear view, magnifications of a single burst band and are given in fig .[ fig : population](b2 ) . for this kind of burstings, burst synchronization refers to a temporal relationship between the active phase onset or offset times of bursting neurons , while spike synchronization characterizes a temporal relationship between spikes fired by different bursting neurons in their respective active phases .in addition to burst synchronization , spike synchronization also occurs in each burst band , as shown in fig .[ fig : population](b2 ) ; as we go from the onset to the offset times , wider stripes appear in the burst band .hence , this kind of burst synchronization occurs on a hedgehoglike limit cycle [ see fig . [fig : order](a ) ] , and exhibits bursting activity like individual potentials .finally , the third fast spike synchronization emerges in a range of .an example for is shown in fig .[ fig : population](c ) .in contrast to fig . [fig : population](a ) , stripes appear successively at short time intervals ms ) in the raster plot , and shows a small - amplitude positive - potential fast rhythm with hz . globally - coupled excitatory subthreshold izhikevich neurons for and .raster plots of spikes and time series of the global potential for ( a ) spike synchronization ( ) , ( b1 ) and ( b2 ) burst synchronization ( ) , and ( c ) fast spike synchronization ( ) . 
] globally - coupled excitatory subthreshold izhikevich neurons for and .transition from spiking to bursting in individual potentials : time series of the membrane potential and the recovery variable of the 1st neuron for ( a1 ) 0.6 , ( a2 ) 0.75 , ( a3 ) 0.8 .plots of fraction ( number of spikes per burst ) versus n for ( b1 ) 0.75 and ( b2 ) 0.8 .average number of spikes per burst versus is shown in ( c ) .transition from bursting to fast spiking in individual potentials : time series of the membrane potential for ( d1 ) 6.06 , ( d2 ) 6.07 , and ( d3 ) 6.09 .inverse of average burst length , , versus is shown in ( e ) .transition from fast spiking to oscillator death in individual potentials : time series of the membrane potential for ( d1 ) 15 , ( d2 ) 18 , and ( d3 ) 20 .mean firing frequency versus is shown in ( g ) . ] with increasing , change in firing patterns of individual neurons and the corresponding population states are discussed . for small individual neurons exhibit noise - induced spikings .figure [ fig : od](a1 ) shows the time series of the membrane potential and the recovery variable of the 1st neuron for . here, the slow variable provides a negative feedback to the fast variable .spiking pushes outside the spiking area .then , slowly decays into the quiescent area [ see fig . [fig : od](a1 ) ] , which results in termination of spiking .this quiescent pushes outside the quiescent area ; then , revisits the spiking area , which leads to spiking of . through repetition of this process , spikings appear successively in , as shown in fig . [fig : od](a1 ) .population synchronization between these individual spikings appear for .however , as j passes a threshold , the coherent synaptic input into the first neuron becomes so strong that a tendency that a spike in can not push outside the spiking area occurs .as an example , see the case of j = 0.75 in fig .[ fig : od](a2 ) . for this case , both spikings ( singlets ) and burstings ( doublets consisting of two spikes ) appear , as shown in fig .4(a2 ) ; 69 percentage of firings are singlets , while 31 percentages of firings are doublets [ see fig . [fig : od](b1 ) ] . for , after the 2nd spike in , at first decreases a little ( with nearly zero slope ) and then increases abruptly up to a peak value of , which is larger than that of for .thus , after the 2nd spike , remains inside the spiking area ; hence , a third spike , constituting a doublet , appears in .after this 3rd spike , is pushed away from the spiking area and slowly decays into the quiescent area , which results in the termination of repetitive spikings . in this way, doublets appear in for j = 0.75 .as j is further increased , the coherent synaptic input becomes stronger , so the number of spikes in a burst increases [ e.g. , see the doublets and triplets for in fig .[ fig : od](a3 ) ] ; 88.7 percentage of firings are doublets , while 11.3 percentages of firings are triplets [ see fig . [fig : od](b2 ) ] .figure [ fig : od](c ) shows the average number of spikes per burst versus , and becomes larger than unity ( i.e. , burstings appear ) for .population synchronization between these burstings occurs for .with increase in , longer burst lengths ( i.e. , lengths of the active phase for the bursting activity ) appear as shown in figs .[ fig : od](d1 ) and [ fig : od](d2 ) , and eventually the average burst length , , diverges to the infinity ( i.e. , its inverse , , decreases to zero ) as goes to [ see fig . [fig : od](e ) ] . 
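The spikes-per-burst and burst-length statistics described above can be extracted from the simulated spike trains with a simple interspike-interval criterion. The gap threshold below is an ad hoc heuristic for separating bursts, not the paper's definition, and the example reuses simulate from the sketch above.

```python
import numpy as np

def burst_statistics(spike_times, isi_gap=20.0):
    """Split one neuron's spike times [ms] into bursts wherever the ISI exceeds isi_gap."""
    t = np.sort(np.asarray(spike_times))
    if t.size == 0:
        return np.array([]), np.array([])
    bursts = np.split(t, np.where(np.diff(t) > isi_gap)[0] + 1)
    counts = np.array([len(bst) for bst in bursts])            # spikes per burst
    lengths = np.array([bst[-1] - bst[0] for bst in bursts])   # active-phase durations [ms]
    return counts, lengths

vg, spikes = simulate(N=50, J=1.0, T=1000.0)
t0 = [t for t, i in spikes if i == 0]                          # spike times of neuron 0
counts, lengths = burst_statistics(t0)
print("average spikes per burst:", counts.mean() if counts.size else 0.0,
      "| average burst length [ms]:", lengths.mean() if lengths.size else 0.0)
```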
then , for individual neurons exhibit fast spikings as shown in fig .[ fig : od](d ) for .since these fast spikes keep no pace with each other , incoherent states appear as shown in fig .[ fig : order](c ) .however , as is further increased , the coupling induces fast spike synchronization in a range of .then , slow spikings with longer spiking phases appear , as shown in figs . [fig : od](f1 ) and [ fig : od](f2 ) .figure [ fig : od](g ) shows the mean firing frequency ( i.e. , the inverse of the average interspike interval ) versus the coupling strength .as approaches a threshold , goes to zero .consequently , for stochastic oscillator death ( i.e. , quenching of noise - induced slow spikings ) occurs [ e.g. , see fig .[ fig : od](f3 ) for . globally - coupled excitatory subthreshold izhikevich neurons for and .plots of the average population spike rate versus the coupling strength for , , and . ]the stochastic oscillator death of individual neurons leads to a transition from firing to non - firing states at the population level .this firing - nonfiring transition may be well described in terms of the average population spike rate which is a time average of the instantaneous population spike rate . to get a smooth instantaneous population spike rate , each spike in the raster plotis convoluted with a gaussian kernel function : where is the neuron index , is the spike of the neuron , is the total number of spikes for the neuron , is the total number of neurons , and the gaussian kernel of band width ( = 1 ms ) is given by here , we discard the first time steps of a stochastic trajectory as transients during ms , and then we numerically compute by following the stochastic trajectory for ms when and . for the firing ( non - firing ) state ,the average population spike rate approaches a non - zero ( zero ) limit value in the thermodynamic limit of .figure [ fig : nonfiring ] shows a plot of the average population spike rate versus the coupling strength .for , tends to zero as goes to the infinity , and hence non - firing states appear due to the stochastic oscillator death of individual neurons .we have studied coupling - induced population synchronization which may be used for efficient cognitive processing by changing the coupling strength in an excitatory population of subthreshold izhikevich neurons . as is increased, rich population states have appeared in the following order : incoherent state spike synchronization burst synchronization incoherent state fast spike synchronization incoherent state non - firing state .particularly , three types of population synchronization ( i.e. , spike , burst , and fast spike synchronization ) have been found to occur .transitions between population synchronization and incoherence have been well described in terms of a thermodynamic order parameter .these various transitions between population states have occurred due to emergence of the following diverse individual states : spiking bursting fast spiking slow spiking oscillator death .each population synchronization and individual state were well characterized by using the techniques of nonlinear dynamics such as the raster plot of spikes , the time series of membrane potentials , and the phase portrait . 
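The instantaneous population spike rate defined above (each spike convolved with a Gaussian kernel of bandwidth h = 1 ms and averaged over neurons) and its time average can be computed directly from the spike list returned by the simulation sketch; the transient window and coupling value in the example are arbitrary choices, not the paper's.

```python
import numpy as np

def population_spike_rate(spikes, N, t_grid, h=1.0):
    """R(t) = (1/N) * sum_{i,s} K_h(t - t_s^{(i)}) with a Gaussian kernel of bandwidth h [ms]."""
    times = np.array([t for t, _ in spikes])
    if times.size == 0:
        return np.zeros_like(t_grid)
    diffs = t_grid[:, None] - times[None, :]
    kernel = np.exp(-0.5 * (diffs / h) ** 2) / (np.sqrt(2.0 * np.pi) * h)
    return kernel.sum(axis=1) / N            # spikes per ms per neuron

vg, spikes = simulate(N=50, J=20.0, T=1000.0)
t_grid = np.arange(200.0, 1000.0, 1.0)       # discard the first 200 ms as a transient
R = population_spike_rate(spikes, N=50, t_grid=t_grid)
print("time-averaged population spike rate:", R.mean())
```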
As a final step, stochastic oscillator death (cessation of individual noise-induced slow spikings) occurred because each individual neuron is attracted to a noisy equilibrium state. This stochastic oscillator death leads to a transition from firing to non-firing states at the population level. The firing-to-nonfiring transition has also been characterized in terms of the average population spike rate. Since the Izhikevich model we employed for our study is a canonical model, we expect that our results remain valid in other neuronal models. Finally, we note that population synchronization of noise-induced firings may lead to the emergence of synchronous brain rhythms in a noisy environment, which contribute to cognitive functions such as sensory perception, multisensory integration, selective attention, and working memory.
we consider an excitatory population of subthreshold izhikevich neurons which exhibit noise - induced firings . by varying the coupling strength , we investigate population synchronization between the noise - induced firings , which may be used for efficient cognitive processing such as sensory perception , multisensory binding , selective attention , and memory formation . as the coupling strength is increased , rich types of population synchronization ( e.g. , spike , burst , and fast spike synchronization ) are found to occur . transitions between population synchronization and incoherence are well described in terms of an order parameter . for still stronger coupling , oscillator death ( quenching of noise - induced spikings ) occurs because each neuron is attracted to a noisy equilibrium state . the oscillator death leads to a transition from firing to non - firing states at the population level , which may be well described in terms of the time - averaged population spike rate . in addition to the statistical - mechanical analysis using the order parameter and the time - averaged population spike rate , the population and individual states are also characterized using the techniques of nonlinear dynamics , such as the raster plot of neural spikes , the time series of the membrane potential , and the phase portrait . we note that population synchronization of noise - induced firings may lead to the emergence of synchronous brain rhythms in a noisy environment , associated with diverse cognitive functions .
thin plate and shell - like structures are ubiquitous in nature , arising in instances that range from leaves and petals in plants to heart valves and epithelial tissues in organisms . they are also used in engineering and technological applications that range from flexible electronic circuits to prosthetic tissue - engineered valves , to large - scale mechanical and civil structures . a basic mathematical model that is used to describe and predict the mechanical behavior of these thin structures has its origins in elastic plate theory , which has been well studied for over a century . most studies examine how the plate behaves in response to external and internal stimuli . these stimuli include not only forces applied to the surfaces and edges of the plate , but also more general effects such as thermal expansion , swelling , plastic deformation , and volumetric growth . while this forward problem remains a rich area of investigation in the mathematical , physical and engineering sciences , a natural question concerns the inverse problem of design : how can we create optimal thin plate and shell - like structures for specific functions ? since shape is a precursor to function in many situations , including the examples above , the simplest such inverse problem is that of asking how to shape a plate using boundary or bulk strains induced by external constraints or inhomogeneous growth . here we examine just this inverse problem : given a target shape that we want the plate to attain , how should the external or internal stimuli ( henceforth `` control variables '' ) be chosen so that the plate is deformed into the target shape ? early work on optimization of plate shape using boundary constraints includes studies focused on using normal traction on the plate surface to change its shape . however , while those studies optimized for a specific target configuration of the plate , their target was characterized by specifying both the normal displacement field and the airy stress function . this formulation is unnecessarily restrictive , as a given target _ shape _ for the deformed plate can be produced by many combinations of the stress function and the in - plane displacements ( which are linked to it ) along the boundary . recently , a new twist to this problem was added as a number of different groups have realized the ability to incorporate inelastic effects such as volumetric growth into elastic plate theory , a subject that has recently attracted much interest . one area of particular interest is the imposition of inhomogeneous growth strains . these give rise to residual stresses which are relieved by the plate buckling out of plane . this can be seen at the edges of certain leaves and flowers , which can have a rippled configuration due to inhomogeneous growth . analogously , irreversible plastic deformation causes ripples at the edge of torn sheets of plastic . it is also possible to shape elastic plates made of gels and other polymeric materials that can swell by imbibing fluids . by blocking the ability of certain parts of the plate to swell or by causing the plate to swell inhomogeneously , it is then possible to cause the plate to assume a variety of different shapes . in particular , these inhomogeneous strains and boundary conditions cause the plate to deform , primarily by bending out of the plane , since that mode of deformation is usually inexpensive .
that this is indeed possible in a controllable waywas shown recently by analytically characterizing a class of in - plane volumetric growth that can transform an originally flat plate assuming certain symmetries in the shape , and then validated experimentally . here ,we complement and generalize this idea to the case of using either boundary and bulk forcing and either in - plane or out - of - plane growth that can lead to variations in the natural curvature , all of which can be inhomogeneous .our aim then is to find the growth strains so that this buckling and other growth - dependent deformations cause the reference plate shape to achieve a given target shape by balancing the requirements of closeness to the target while at the same time not having large inhomogeneities in the growth strain field ( which are typically hard to engineer in technology or control in biology ). our analysis will be more general than the specific instances outlined above in that we will develop a numerical method for arbitrary target shapes , and also consider not just in - plane growth but also active changes of curvature ( caused by growth which is greater at one side of the plate than the other ) .the equations for growing plates are described in section [ sect - growth - eqs ] , and the optimization process ( structurally similar to the work of jones and pereira that was started after this work was underway , but submitted earlier ) is explained in section [ sect - main - theory ] . following thiswe solve the system numerically for a general non - symmetric configuration ( section [ sect-2d ] ) , and for simplified one - dimensional ( section [ sect-1d ] ) and axisymmetric ( section [ sect - axi ] ) geometries .finally , in section [ sect - simple ] we use a semi - analytic approach on a circular disk to investigate how axisymmetric growth can give rise to so - called soft deformation modes .the equations governing volumetric growth in plates can be derived from one of two equivalent viewpoints : either by changing the definition of the reference metric , or by decomposing the strain tensors into growth and accommodation components . in the first approach, the reference metric of the plate is changed from its usual euclidean form to a prescribed non - euclidean metric .if a plate can be visualized as a collection of evenly - spaced points in a plane , connected by springs with a constant rest length , then imposing a non - euclidean metric is equivalent to changing the spring rest lengths in such a way that a stress - free planar configuration of the points is impossible .this is kinematically equivalent to imposing an inhomogeneous in - plane growth field .thus , a plate with an imposed non - euclidean metric will tend to buckle out of plane in order to minimize its stored energy as long as the applied strains are sufficiently large . a second approach to this problemis to consider the elastic growth process directly , and to derive the equation in the limit of small strain and small plate thickness .this leads to very similar equations albeit approached from different perspectives from a differential geometric perspective , and formal perturbation theory , , and bear deep similarities to the equations written down nearly half a century ago by mansfield for the thermoelastic deformations for plates . 
in all cases , the nonlinear growth in an elastic bodyis kinematically described by a multiplicative decomposition of the deformation gradient but in the plate limit the growth becomes an additive contribution to the strain fields . in this sectionwe will present the main equations , modified to account for varying plate thickness .growth is not the only phenomenon that can be described using this formalism ; both thermoelastic expansion and plastic deformation are also kinematically described ( especially in the small - thickness limit of plate theory ) by additive decompositions of the strain tensors .the difference between these three theories is of course in how the non - elastic part ( growth , thermal expansion , plastic deformation ) is described , and in how these effects alter the properties of the material ( including material density , stiffness tensors , and porosity ) .we will assume that the non - elastic parts of the strain tensor are small , so that these higher - order effects can be neglected . with this in mind, we define a plate using cartesian coordinates , with its deformation characterized by the in - plane displacements and out - of - plane displacement , where greek indices vary over .the growth in the plate may be characterized by the growth strains and , such that the in - plane strain and the change - of - curvature tensor may be additively decomposed into growth and accommodation components : and .this decomposition is valid if the strain fields remain small . in terms of displacementthe elastic accommodation strain tensors are thus given by and an index preceded by a comma indicates differentiation with respect to that coordinate .the elastic energy density is given by ( applying the summation convention ) , where and , and are the young s modulus , poisson ratio and thickness of the plate respectively .we scale the displacements and with , a typical lengthscale of the problem ; with ; the variable thickness with typical value , leading to typical values , for the stiffnesses .finally we define to be the dimensionless stiffness ratio .the dimensionless equations governing the plate deformation under the action of the growth strains and ( assuming no surface loading ) are the generalized fppl von krmn ( fvk ) equations : }+\frac{1}{2}[w , w]+\lambda^\mathrm{g}=0,\label{fvk1}\\ \beta\nabla^2{\left(h^3\nabla^2w\right)}-\beta(1-\nu)[h^3,w]-[w,\chi]+\beta\phi^\mathrm{g}=0.\label{fvk2}\end{aligned}\ ] ] in these expressions , =f_{,11}g_{,22}-2f_{,12}g_{,12}+f_{,22}g_{,11} ] is the gaussian curvature of the deformed surface .the fvk equations are solved with appropriate boundary conditions . if , are the components of the tangent and normal vectors to the plate edge , then the natural boundary conditions , corresponding to force - free and moment - free conditions , are where is the stress resultant tensor given in ( [ chi - definition ] ) , and is the ( dimensionless ) momentresultant tensor , . 
we will also be applying pinned boundary conditions , for which ( [ free - bcs-1])([free - bcs-2 ] ) are replaced by , .the fppl von krmn equations ( [ fvk1])([fvk2 ] ) outlined above do not involve the tangential displacement field , directly .thus an extra step would be needed to calculate and from before measuring the distance between the deformed plate and the target shape .an alternative to this approach is to write the system explicitly in terms of the three displacement components , , .we find it more convenient to write these equations in weak form , as they may be solved straightforwardly using finite elements .the fppl von krmn equations with growth were written in weak form by lewicka . however ,in their formulation the normal displacements are required to be twice differentiable .as we will be using linear finite elements , we modify the equations following reinhart , who treats the curvature as three new independent variables , with three additional weak - form equations to solve . in summary, the six equations to solve for the six variables , , , , , are shown below .quantities with a tilde are the variations ; the weak equations hold for all admissible ( once - differentiable ) values of these variations .}\,\mathrm{d}^2\boldsymbol{x}=0,\label{weakform1}\\ \fl{\int\!\!\!\int}_\omega{\left[{\frac{\partial { \widetilde}{v}_2}{\partial x}}n_{12}+{\frac{\partial { \widetilde}{v}_2}{\partial y}}n_{22}-{\widetilde}{v}_2q_2\right]}\,\mathrm{d}^2\boldsymbol{x}=0,\\ \fl{\int\!\!\!\int}_\omega{\left[{\widetilde}{\rho}_{11}(\rho_{11}+\nu\rho_{22})+{\frac{\partial { \widetilde}{\rho}_{11}}{\partial x}}{\frac{\partial w}{\partial x}}+\nu{\frac{\partial { \widetilde}{\rho}_{11}}{\partial y}}{\frac{\partial w}{\partial y}}\right]}\,\mathrm{d}^2\boldsymbol{x}\nonumber\\ = \oint_{\partial\omega}{\widetilde}{\rho}_{11}{\left({\frac{\partial w}{\partial x}}n_1+\nu{\frac{\partial w}{\partial y}}n_2\right)}\,\mathrm{d}s,\\ \fl{\int\!\!\!\int}_\omega{\left[2{\widetilde}{\rho}_{12}\rho_{12}+{\frac{\partial { \widetilde}{\rho}_{12}}{\partial x}}{\frac{\partial w}{\partial y}}+{\frac{\partial { \widetilde}{\rho}_{12}}{\partial y}}{\frac{\partial w}{\partial x}}\right]}\,\mathrm{d}^2\boldsymbol{x } = \oint_{\partial\omega}{\widetilde}{\rho}_{12}{\left({\frac{\partial w}{\partial x}}n_2+{\frac{\partial w}{\partial y}}n_1\right)}\,\mathrm{d}s,\\ \fl{\int\!\!\!\int}_\omega{\left[{\widetilde}{\rho}_{22}(\nu\rho_{11}+\rho_{22})+\nu{\frac{\partial { \widetilde}{\rho}_{22}}{\partial x}}{\frac{\partial w}{\partial x}}+{\frac{\partial { \widetilde}{\rho}_{22}}{\partial y}}{\frac{\partial w}{\partial y}}\right]}\,\mathrm{d}^2\boldsymbol{x}\nonumber\\ = \oint_{\partial\omega}{\widetilde}{\rho}_{22}{\left(\nu{\frac{\partial w}{\partial x}}n_1+{\frac{\partial w}{\partial y}}n_2\right)}\,\mathrm{d}s,\\ \fl{\int\!\!\!\int}_\omega\left[p{\widetilde}{w}+{\frac{\partial { \widetilde}{w}}{\partial x}}{\left(-{\frac{\partial w}{\partial x}}n_{11}-{\frac{\partial w}{\partial y}}n_{12}+{\frac{\partial m_{11}}{\partial x}}+{\frac{\partial m_{12}}{\partial y}}\right)}\right.\nonumber\\ + \left.{\frac{\partial { \widetilde}{w}}{\partial y}}{\left(-{\frac{\partial w}{\partial x}}n_{12}-{\frac{\partial w}{\partial y}}n_{22}+{\frac{\partial m_{12}}{\partial x}}+{\frac{\partial m_{22}}{\partial y}}\right)}\right]\,\mathrm{d}^2\boldsymbol{x}\nonumber\\ = \oint_{\partial\omega}{\left[{\frac{\partial { \widetilde}{w}}{\partial x}}(m_{11}n_1+m_{12}n_2)+{\frac{\partial { \widetilde}{w}}{\partial 
y}}(m_{12}n_1+m_{22}n_2)\right]}\,\mathrm{d}s.\label{weakform6}\end{aligned}\ ] ] in these expressions , },\\ m_{12}=\beta h^3(1-\nu)(\rho_{12}-\psi_{12}),\\ m_{22}=\beta h^3{\left[\nu(\rho_{11}-\psi_{11})+\rho_{22}-\psi_{22}\right]},\label{m22}\end{aligned}\ ] ] and we have included the normal and tangential surface tractions , and respectively , for completeness . is the domain of the undeformed plate . on solving equations ( [ weakform1])([weakform6 ] ) in the space of once - differentiable functions ,the natural boundary conditions are the free boundary conditions ( [ free - bcs-1])([free - bcs-3 ] ) . for pinned boundary conditionsthe space of admissible functions must in addition specify that on the plate boundary . for clamped boundary conditions ( for instance, in the example provided in the introduction ) the right - hand sides of all six equations must be set to zero , in order to impose on the boundary without specifying ( [ free - bcs-3 ] ) there also .while the previous section allows one to calculate the plate displacements subject to certain stimuli ( growth fields , surface tractions , edge displacements ) , the key calculation from our viewpoint is to find what form of stimulus will give a desired property of the displacement field .abstractly , we denote the stimuli as control variables and the displacement and curvatures as state variables .then the condition on the displacement field can be written as a minimization of a certain functional of the state variables .thus we obtain a pde - constrained optimization problem : for .the equations are the constraints , which comprise the fvk equations ( [ weakform1])([weakform6 ] ) .the problem ( [ pdeconopt ] ) is ill - posed , since there will be many combinations of and that minimize , and non - smooth solutions are often the most accessible to numerical methods .thus a regularization term must be added to , so that some property of the control variables is minimized .tikhonov regularization is a commonly - encountered example of this method .the optimization problem becomes }\qquad\textrm{subject to } c_i(\boldsymbol{u},\boldsymbol{d})=0\end{aligned}\ ] ] for .the parameter is chosen as a trade - off between numerical well - posedness and adherence to the requirement that the target displacement be met .as an example of the situation that we envisage , consider a flat plate of arbitrary shape .the edges of the plate are clamped and are allowed to be displaced in - plane . in this situationthe inverse problem to be solved is how to choose these edge displacements so that the interior is deformed into a given configuration .for example , consider a circular plate of radius .how should the clamped edges be deformed so that the center point of the plate attains a given vertical displacement , ?the theory of section [ sect - main - theory ] can be used to solve this problem and other plate optimization problems with certain modifications .the constraints to the problem are the fvk equations ( [ weakform1])([weakform6 ] ) , with and .the boundary conditions are clamped , so the right - hand sides of ( [ weakform1])([weakform6 ] ) are set to zero . at the boundarywe impose and ( and ) , where are the prescribed edge displacements , used as control variables .finally we must specify an objective function .an appropriate form is but as we have seen , the problem is ill - posed without a regularization term . 
for this problemwe set }\mathrm{d}s.\ ] ] then the problem is solved by minimizing subject to the fvk equations with zero growth and clamped boundary conditions by varying the state variables , and the control variables . in figure [ circle_bump ] we display the results of this optimization calculation for . at the plate center achieved by solving for the plate edge displacements .plate thickness , poisson ratio and regularization parameter .,scaledwidth=70.0% ] we will now formulate an optimization problem for the growing plate in other words , to determine the optimal growth strains that allow the plate to achieve a given target shape . we propose that the optimal solution should minimize the functional , with the regularization parameters to be introduced later . in this functional , is a measure of the distance between the deformed plate and the target shape and is a regularization term which has the effect of smoothing the growth fields . in general the solutionwill therefore comprise a balance between closeness to the target shape , and spatial smoothness of the growth fields .the frchet distance and hausdorff distance are general measures of the distance between two surfaces in three dimensions . however ,if the target is known as an analytic function , simpler formulations are possible .if the target shape and plate deformations are axisymmetric or otherwise one - dimensional , we can make use of the following scaled arclength implementation .consider a 1d plate of length , with a target shape for .then the parametric definitions of the curves traced out by the deformed plate ( under in - plane displacement and normal displacement ) and the target shape are respectively the arclengths of the curves are then }^{1/2 } \mathrm{d}\bar{x},&\qquad s_\mathrm{max}=s(1),\label{eq : sx}\\ s(x)&=\int_0^x{\left[1+f'(\bar{x})^2\right]}^{1/2}\mathrm{d}\bar{x},&\qquad s_\mathrm{max}=s(x_\mathrm{max})\label{eq : sx}\end{aligned}\ ] ] respectively .these can be inverted to give , , and thus the deformed and target shapes parametrized by arclength : we can then define a distance function by scaling and to provide a correspondence between these two parametrizations : let }^2\nonumber\\ + { \left[w({\widetilde}{x}(s_\mathrm{max}\sigma))-f({\widetilde}{x}(s_\mathrm{max}\sigma))\right]}^2\qquad\textrm{for } \sigma\in(0,1),\label{arclength - distance}\end{aligned}\ ] ] and define for some tunable parameter . in the more general two - dimensional case ,if the target shape is given as an elevation _ i.e. _ in eulerian components then we may write the distance between the deformed plate and the target as and minimize .note however that we must impose an additional constraint that the boundary of the undeformed plate must be mapped to the boundary of the target shape .this can be achieved by adding a term to the objective function , which is a measure of the distance between these two boundaries and can be calculated using the arclength method described above . specifically ,if are the parametric representations of the undeformed , deformed , and target boundaries respectively , then by analogy with ( [ eq : sx ] ) , invert these to give and , and thus the deformed and target shapes parametrized by arclength : then }^2 + { \left[y_\mathrm{b}^\mathrm{d}(\theta_s(s_\mathrm{max}\sigma))-y_\mathrm{b}(\theta_s(s_\mathrm{max}\sigma))\right]}^2\right.\nonumber\\ \left . 
+ { \left[z_\mathrm{b}^\mathrm{d}(\theta_s(s_\mathrm{max}\sigma))-z_\mathrm{b}(\theta_s(s_\mathrm{max}\sigma))\right]}^2\right\}\mathrm{d}\sigma,\label{ee}\end{aligned}\ ] ] and .the regularization term noted earlier is given by }\,\mathrm{d}^2\boldsymbol{x},\ ] ] where and are tunable parameters .this objective function embodies the principle that the gradients of the growth strains in the optimal solution should be as small as possible .( for isotropic growth the regularizing term becomes . )one practical reason for this restriction on the growth strains is that if we were to experimentally verify the solutions obtained by the optimization process , we would want the solution to be as insensitive as possible to manufacturing errors , which would be hard to achieve if and varied rapidly across the undeformed plate .the minimization of will be subject to the constraint that the control variables , and state variables , , and satisfy the modified fppl von krmn equations ( [ weakform1])([weakform6 ] ) , in the appropriate function spaces ( surface tractions , are set to zero ) . thus the optimization problem can be stated as follows : }\label{nondim - objective}\\ \textrm{subject to the fvk equations ( \ref{weakform1})--(\ref{weakform6}).}\nonumber\end{aligned}\ ] ] in section [ sect-2d ] we will outline some numerical solutions of the optimization problem ( [ nondim - objective ] ) , first in its full two - dimensional implementation , followed by simplified one - dimensional situations , namely a beam and an axisymmetric target shape . following thiswe will discuss a semi - analytic approach , where growth leading to simple target shapes can give rise to soft deformation modes .to solve ( [ nondim - objective ] ) , we need to discretize the variables . to this end , the space of admissible solutions to ( [ weakform1])([weakform6 ] ) is approximated by the space of piecewise affine functions , and the domain is triangulated ( for our calculations we used the distmesh routine ) . to simplify calculations in this section ,the domain is a circle of radius , the thickness is set to and growth is isotropic ( , ) .the control variables , and the state variables , , , , , are all set to be piecewise affine over each triangle element , so that the function values at each node of the triangulation become the discrete variables to be solved for , as in the standard linear finite element approach .we used the sparse sqp solver ` e04vh ` of the nag toolbox , based on the software package snopt .this algorithm is well suited to such discrete numerical nonlinear optimization problems , and may be accessed through an interface to the numerical analysis package matlab . for a more thorough overview of the numerical procedure , refer to [ proc - appendix ] . in figure[ monkey ] , we plot the result for a monkey saddle target shape , which has an elevation profile of , and , , , , , . we see clearly that the dominant factor in the solution is , which is an order of magnitude greater than . furthermore, is positive at the boundary of the disk but negative in the interior .this result tallies with previous results which predict that excess growth at the boundary of the disk will cause ripples there , since the residual stress caused by the growth is relieved by buckling out of the plane . 
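as a concrete illustration of this discretization , the sketch below evaluates the smoothing ( regularization ) part of the objective for piecewise - affine nodal growth fields on a triangulation ; because the fields are affine on each triangle , their gradients are constant per triangle , which is the property also exploited in the procedure of [ proc - appendix ] . the array layout , the field names and the weighting constant are illustrative assumptions , not the actual data structures used for the calculations reported here .

```python
import numpy as np

def tri_gradients(nodes, tris, field):
    """Per-triangle constant gradient of a piecewise-affine nodal field.

    nodes : (N, 2) array of node coordinates
    tris  : (M, 3) array of node indices for each triangle
    field : (N,)  nodal values (e.g. an isotropic growth strain)
    Returns (M, 2) gradients and (M,) triangle areas.
    """
    p0, p1, p2 = nodes[tris[:, 0]], nodes[tris[:, 1]], nodes[tris[:, 2]]
    f0, f1, f2 = field[tris[:, 0]], field[tris[:, 1]], field[tris[:, 2]]
    e1, e2 = p1 - p0, p2 - p0
    det = e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0]      # twice the signed area
    area = 0.5 * np.abs(det)
    # invert the 2x2 edge matrix [e1; e2] acting on the gradient
    gx = ( e2[:, 1] * (f1 - f0) - e1[:, 1] * (f2 - f0)) / det
    gy = (-e2[:, 0] * (f1 - f0) + e1[:, 0] * (f2 - f0)) / det
    return np.stack([gx, gy], axis=1), area

def regularization(nodes, tris, lam_g, psi_g, c_psi=1.0):
    """Discrete analogue of the smoothing term: sum over triangles of
    area * (|grad lam_g|^2 + c_psi * |grad psi_g|^2).
    The relative weight c_psi is an assumed illustrative value."""
    g_lam, area = tri_gradients(nodes, tris, lam_g)
    g_psi, _ = tri_gradients(nodes, tris, psi_g)
    return np.sum(area * (np.sum(g_lam**2, 1) + c_psi * np.sum(g_psi**2, 1)))
```

the distance - to - target and boundary - matching terms are added to this quantity before the total objective and the discretized weak - form constraints are handed to the constrained optimizer , as described above .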
, and ( b ) active curvature change in the solution .the target shape is shown in ( c).,scaledwidth=100.0% ]we can gain a greater understanding of the optimization results by considering a simplified geometry .the first example we present is of one - dimensional growth in a beam , where we set and assume all quantities are independent of the cartesian coordinate .we imposed a target shape , , and considered two sets of boundary conditions . in the first case both sides are pinned : the displacements are fixed and a zero moment is applied . in the second case the tractions and moments at the edgesare set to zero .the right - hand side is at because the distances have been nondimensionalized . due to the one - dimensional nature of the beam , is well - defined , and hence so is the objective function in ( [ nondim - objective ] ) .the fvk equations ( [ weakform1])([weakform6 ] ) are imposed as constraints with .however , the simplified geometry allows us to reduce the problem to solving for and as piecewise affine ( linear ) functions over the domain , through the weak form equations }\,\mathrm{d}x=0,\end{aligned}\ ] ] solved for all admissible variations , .the normal displacement is found by integrating .free boundary conditions are the natural boundary conditions while pinned boundary conditions are set by the imposition of the additional constraint that .we performed sample calculations for , and for , both ranging over to .graphs of the objective function as a function of and are displayed in figure [ 1d - results](a , b ) for both pinned and free boundary conditions .we can see that as both and increase relative to , so does the objective function .the distributions of and over are displayed in figure [ 1d - results](c , d ) , for pinned and free conditions respectively .we choose the representative values of , , to enforce the condition .the reason for this choice is firstly to ensure that matching to the target shape is given the most weight , and secondly to penalize changes in more than changes in , since we speculate that it is simpler to experimentally control than .the main difference between the solutions using different boundary conditions is that both the growth strains and are larger if the edges are free .this is because in the pinned case , the plate can leverage the fixed displacements at the edges to buckle out of plane into the target shape , whereas with the free boundary condition the structure does not have this freedom ( at least in one dimension ) and must actively bend through to achieve the shape . indeed , solving the fvk equations directly with the calculated solution in figure [ 1d - results](c ) yields a bistable configuration characteristic of buckling : the plate can achieve both the target shape and an inverted solution , much like an euler column ( although in this case the two states have different energies , due to the asymmetry introduced through ) .this bistability is absent on using the solution in [ 1d - results](d ) . .plots ( b , c ) : surface plots of the scaled objective function as a function of the scaled parameters and , for ( b ) pinned and ( c ) free boundary conditions . 
plots ( d , e ) : distributions of ( ) and ( ) over , for ( d ) pinned and ( e ) free boundary conditions.,scaledwidth=100.0% ]for axisymmetric target shapes , all quantities are presupposed to depend on the radial coordinate only .beginning with a flat disk of radius ( in dimensionless coordinates ) , we apply an isotropic growth field , .zero - traction conditions are applied on the outer rim of the disk .given our experience of one - dimensional growth , we would thus expect to play a greater role than in shaping the plate. we will also , however , repeat the calculations while holding to see if the shapes are attainable through changes in metric alone .the target shape is achieved by minimizing as before ; the arclength functional ( [ arclength - distance ] ) is used , using the cross - section of the deformed plate along the meridian , without loss of generality .we perform calculations for two separate target shapes , which are displayed in figure [ axi - results](c , d ) : for .the gaussian curvature of a surface defined by can be shown to be .as such , profile 1 has a positive gaussian curvature at all points , while the other profile consists of a central region of positive gaussian curvature surrounded by a region of negative gaussian curvature .( ) and ( ) for profiles 1 ( c , e ) and 2 ( d , f ) , with . is allowed to vary in plots ( c , d ) ; is set to zero for plots ( e , f ) .inset : legend for plots ( c)(f).,scaledwidth=100.0% ] as in section [ sect-1d ] , there is a simplified weak form system for the solution of such axisymmetric problems .where , we solve the following for all admissible variations , : \,\mathrm{d}r=0,\\ \fl\int_0 ^ 1\left[\beta r{\frac{\mathrm{d } { \widetilde}{u}}{\mathrm{d } r}}{\left({\frac{\mathrm{d } u}{\mathrm{d } r}}+\frac{\nu u}{r}-(1+\nu)\psi\right)}+\beta{\widetilde}{u}{\left(\nu{\frac{\mathrm{d } u}{\mathrm{d } r}}+\frac{u}{r}-(1+\nu)\psi\right)}\right.\nonumber\\ \left.+r{\widetilde}{u}u{\left({\frac{\mathrm{d } v}{\mathrm{d } r}}+\frac{\nu v}{r}+\frac{u^2}{2}-(1+\nu)\gamma\right)}\right]\,\mathrm{d}r=0.\end{aligned}\ ] ] the distributions of and for the two profiles are displayed in figure [ axi - results ] , allowing to vary ( c , d ) and setting it to zero ( e , f ) . in each case , .we can clearly see that increasing makes the distributions of and smoother , and this is particularly noticeable when we impose .the greatest difference between the solutions with and without the assumption , is that if then the solutions are almost entirely due to a constant field : as we had predicted , the free boundary conditions mean that the plate needs to actively bend to the desired shape .it is interesting to compare the constant results for both profile shapes . for the paraboloidal profile 1 ,the change of curvature term is positive , while for profile 2 it becomes negative .we would expect the negative constant to also give a paraboloidal shape , but it transpires that this state is bistable : a mechanical eversion gives rise to the desired profile 2 . 
on the other hand , if is set to zero , then the negative gaussian curvature at the rim of profile 2 is introduced by increasing the growth strains here .liang and mahadevan analyzed modified versions of the equations ( [ fvk1])([fvk2 ] ) in order to demonstrate how a blooming flower can be regarded as a mechanical phenomenon caused by buckling due to differential growth strains .this analysis was enabled by analyzing a simplified shell geometry considered representative of the actual petal shape .mansfield also investigated this system a circular plate with zero and constant isotropic due to an applied temperature gradient and showed that initially the deformed plate was a spherical cap .however , at a certain critical value of , this solution became unstable and bifurcated to a nonsymmetric shape similar to a section of a cylinder .this result illustrates the phenomenon of a soft mode , or a zero - stiffness deformation mode .specifically , while the deformation field is nonaxisymmetric , the underlying mechanical properties of the material ( undeformed shape , stiffness , growth fields ) are independent of angle ( _ i.e. _ axisymmetric ) .thus the same non - axisymmetric deformation , rotated by an arbitrary angle , is also a solution of the system , with the same stored energy .this one - parameter family of solutions is known as a soft mode .the ability of such structures to change shape without the requirement of large energy input has given them both theoretical and practical importance , with applications ranging from actuators to deployable structures .mansfield s bifurcation was reproduced experimentally by lee , where a flat disc comprising two layers of unequal thermal expansion coefficient was heated , corresponding to the imposition of a constant field was imposed . under a large enough temperature ,the initially axisymmetric shape buckled to mansfield s nonaxisymmetric soft mode .other soft modes have also been developed experimentally , notably by guest , who created a zero - stiffness elastic shell by plastically deforming a metallic plate to a shell with a cylindrical geometry .taking mansfield s work as our starting point , we will simplify the normal displacements and growth functions to be quadratic functions of position , and use our optimization technique to solve for the coefficients of these functions , rather than for their full pointwise distribution .we will show that the near - cylindrical geometry of mansfield is not the only soft mode achievable by the application of axisymmetric growth functions .these solutions are a special case of the solutions found by seffen and maurini ; our results emphasize the neutrally - stable nature of the deformations .the first difficulty one encounters when performing an analysis on such a simplified deformation ansatz is that the boundary conditions will not , in general , be satisfied . to remedy thiswe must assume a specific form for the variable thickness .in particular , if the plate is circular , with radius , set the thickness to be .because of the dependence of the bending and stretching stiffnesses and on , we find that the in - plane stress resultants and moment resultants tend to zero as , so that the boundary conditions are now automatically satisfied . 
additionally , with simple forms of the dependent variables , a solution may be found to the fvk equations ( [ fvk1])([fvk2 ] ) .for instance , for a circular plate of ( dimensionless ) radius , set where we have assumed isotropic growth .we have hereby reduced the problem to determining the seven constants , , , , , , and by minimizing the objective function subject to the fvk equation constraints . considering the constraints first ,the stress - free boundary conditions for this system are satisfied automatically . on substituting ( [ ansatz1])([ansatz3 ] ) into the fvk equations ( [ fvk1])([fvk2 ] ) ,we obtain the following relations between the coefficients : the remaining three degrees of freedom are set by minimizing the objective function . since , and hence .however , for this application the smoothness of is not relevant and we set , so that . to calculate , we need the full displacement field , including the in - plane displacements in the radial direction and in the circumferential direction .these are given by the expression for does not lend itself well to a simple distance function which may be integrated over the area of the circular plate . however , we may approximate a distance function by calculating the arclength distance measure ( [ arclength - distance ] ) for ( which is where ) , and summing the results : we can now state the optimization problem for this simplified formulation ( f1 ) : choose , , , , , that minimize , subject to equations ( [ semianalytic - constr-1])([semianalytic - constr-3 ] ) .we will now illustrate this method by considering the growth patterns required to transform the circular plate to the targets outlined earlier .we consider an axisymmetric profile , a cylindrical profile ( as an approximation to mansfield s bifurcated solution ) and a saddle geometry . substituting these target shapes into the optimization procedure will output the values of the constants . however , by exploiting symmetry to write in terms of , we can find , and in terms of from ( [ semianalytic - constr-1])([semianalytic - constr-3 ] ) , and then and are calculated by minimizing the distance functional .for the paraboloid of revolution , and equation ( [ semianalytic - constr-2 ] ) is automatically satisfied .the optimization thus has an extra degree of freedom .however , the solution obtained has much greater than both and , making it comparable with mansfield s original solution with .in fact , setting we obtain his result exactly : however this result , as noted previously , becomes unstable when .figure [ mansfield - bifurcation ] displays the bifurcation diagram for the parameters , as varies. increases past a critical value , the axisymmetric solution ( ) becomes unstable , and a solution with emerges .plots are for and .,scaledwidth=60.0% ] for those cases where the -only solution is unstable , we can still find a paraboloidal solution by setting ; by subsequently solving ( [ semianalytic - constr-1])([semianalytic - constr-3 ] ) we obtain . in summary : in both cases , and are found by minimizing .note the similarity between these results and those of section [ sect - axi ] , where a paraboloidal bowl was found for , or for and an in - plane growth which may be approximated by , as here . 
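a short symbolic check helps explain why spatially uniform growth fields pair naturally with this quadratic ansatz : for a quadratic normal displacement the bracket [ w , w ] defined earlier is constant over the plate , so the gaussian curvature it generates ( one half of the bracket in the fvk approximation ) is uniform . the sketch below verifies this ; the coefficient names are illustrative and are not the constants of the ansatz above .

```python
import sympy as sp

x, y, a, b, c = sp.symbols('x y a b c')

def bracket(f, g):
    # Monge-Ampere bracket [f, g] = f_xx g_yy - 2 f_xy g_xy + f_yy g_xx
    return (sp.diff(f, x, 2) * sp.diff(g, y, 2)
            - 2 * sp.diff(f, x, y) * sp.diff(g, x, y)
            + sp.diff(f, y, 2) * sp.diff(g, x, 2))

# quadratic ansatz for the normal displacement (hypothetical coefficients a, b, c)
w = sp.Rational(1, 2) * (a * x**2 + b * y**2) + c * x * y

K = sp.simplify(bracket(w, w) / 2)   # Gaussian curvature in the FvK approximation
print(K)                             # -> a*b - c**2, independent of x and y
```

with this convention a paraboloid of revolution corresponds to a positive constant , a cylinder to zero , and a saddle to a negative constant , consistent with the three target shapes treated below .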
for a cylindrical target shape , ( as opposed to mansfield s bifurcated solution , which had both and positive ) .we can achieve this shape by solving ( [ semianalytic - constr-1])([semianalytic - constr-3 ] ) to give with and again solved for by optimization of the distance functional .finally a saddle shape where can be found in the same manner : this time plots of all three deformed plates can be seen in figure [ sa - plots ] . , , where the coefficients are chosen to satisfy the minimization problem ( f1 ) . the target shape and specific expressions for , given as follows : ( a ) paraboloidal target , , equations ( [ g2-p0-bowl ] ) ; ( b ) cylindrical target , , equations ( [ g2-p0-fold ] ) ; ( c ) saddle - shaped target , , equations ( [ g2-p0-pringle]).,scaledwidth=80.0% ]in this article we have outlined a new approach to determining the optimal distribution of growth stresses that transform a flat plate into a specified target shape . not only have we calculated the solution for non - symmetric and for simplified one - dimensional geometries ( sections [ sect-2d][sect - axi ] ) , but qualitative results have been obtained using a semi - analytic approach ( section [ sect - simple ] ) , and have been used to show that an axisymmetric growth pattern can be used to produce a structure which exhibits soft mode deformations .possible extensions to this theory include curved initial geometries ( shells ) , the relaxation of the small - growth - strain assumptions ( leading to more strongly nonlinear equations ) , and the use of different control variables , such as edge displacements or surface tractions .in any case we believe that this approach will prove useful for researchers who wish to engineer plate deformations into a desired shape .the authors would like to acknowledge funding from the harvard national science foundation materials research science and engineering center , the wyss institute for biologically inspired engineering , and the kavli institute for bionano science and technology .here we outline the solution procedure for the problem described in section [ sect-2d ] . for this ,the equations ( [ weakform1])([weakform6 ] ) require discretization .the state variables , , , , , and control variables , are defined in terms of their values at points forming the nodes of a triangulation of the domain .the triangulation enables the generation of basis functions for each node , so that ( for instance ) the out - of - plane displacement is approximated by .this allows the six weak form pdes ( [ weakform1])([weakform6 ] ) to be rewritten as algebraic equations in terms of the nodal values of the variables . *computational procedure : * 1 .express the outline of the initial ungrown plate as a parametric representation .2 . express a target surface for the grown plate , together with a target boundary .3 . calculate from ( [ xbybzb ] ) for a fine mesh of $ ] .4 . use to find a triangulation of the source domain .5 . for each node in the triangulation , calculate the basis functions .initialize the state and control variables to be zero at each node .* main solution routine .* the optimization routine ` e04vh ` calculates the optimal ( ) such that is minimized subject to for . in our case the number of equations is , and the number of variables is . 1 .* limits : * set for each , , and for each .* subroutine : * calculate given input vector is the concatenation of the values of , , , , , , , and at each node in the triangulation .2 . 
use the triangulation geometry and basis functions to calculate the gradients of each of these variables in each triangle ( by construction , they will be piecewise constant in each triangle ) . 3 . calculate the value of at each node in the triangulation , and use this to calculate . 4 . find the boundary of the deformed mesh , and calculate for each point corresponding to a boundary node . use this together with the previously calculated for these to calculate from ( [ ee ] ) . 5 . use the gradients of and to calculate . combine the previous three integrals to calculate the objective function , and set to be this value . calculate the discretized weak form equations , and set these to be the constraints . output state and control variables and plot results .
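as a sketch of the arclength matching used in step 4 above ( and in the one - dimensional distance functional of section [ sect - main - theory ] ) , the snippet below re - parametrizes a deformed section and a target profile by normalized arclength and integrates the squared pointwise distance between them . the sampling resolution and the cosine - bump target are arbitrary choices for illustration .

```python
import numpy as np

def arc_param(xs, ys, n=400):
    """Re-parametrize a planar curve by normalized arclength sigma in [0, 1]."""
    s = np.concatenate([[0.0], np.cumsum(np.hypot(np.diff(xs), np.diff(ys)))])
    sigma = np.linspace(0.0, 1.0, n)
    return np.interp(sigma, s / s[-1], xs), np.interp(sigma, s / s[-1], ys)

def profile_distance(x, u, w, f, n=400):
    """Squared distance between the deformed section (x + u, w) and the
    target profile (x, f(x)), matched point-by-point in normalized arclength.
    x, u, w are sampled on a common grid; f is a callable target elevation."""
    xd, yd = arc_param(x + u, w, n)          # deformed curve
    xt, yt = arc_param(x, f(x), n)           # target curve
    return np.trapz((xd - xt)**2 + (yd - yt)**2, np.linspace(0.0, 1.0, n))

# illustrative use: an undeformed beam compared with a cosine-bump target
x = np.linspace(0.0, 1.0, 200)
print(profile_distance(x, 0 * x, 0 * x, lambda x: 0.1 * (1 - np.cos(2 * np.pi * x))))
```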
a flat plate can bend into a curved surface if it experiences an inhomogeneous growth field . in this article a method is described that numerically determines the optimal growth field giving rise to an arbitrary target shape , optimizing for closeness to the target shape and for growth field smoothness . numerical solutions are presented , for the full non - symmetric case as well as for simplified one - dimensional and axisymmetric geometries . this system can also be solved semi - analytically by positing an ansatz for the deformation and growth fields in a circular disk with given thickness profile . paraboloidal , cylindrical and saddle - shaped target shapes are presented as examples , of which the last two exemplify a soft mode arising from a non - axisymmetric deformation of a structure with axisymmetric material properties .
a classic problem in the field of hydraulics is determining the distribution of flow rates and pressures inside a given piping network for fixed inlet conditions . many practical fluid networks such as municipal water delivery have turbulent flow and thus a nonlinear resistance , making their analytical solution difficult . in 1936 , a structural engineer named hardy cross revolutionized the analysis of hydraulic networks by developing a systematic iterative method by which one could reliably solve nonlinear network problems by hand calculation . while analysis of such hydraulic networks is now considered routine with computer techniques , the problem can once again become intractable if one considers networks filled with a fluid comprised of multiple phases or constituents . analysis of such networks is of interest because in a number of applications it has been observed that the phase distribution within the network may exhibit unsteady or non - unique flow . such heterogeneous distribution of phase within network flows has been studied at a variety of scales . at the micro - scale , the flow of droplets or bubbles through microfluidic networks can demonstrate bistability and spontaneous oscillations . these nonlinearities have been exploited by researchers who have demonstrated microfluidic memory , logic , and control devices . on the macro - scale , models of magma flow with either temperature - dependent viscosity or volatile - dependent viscosity have shown the existence of multiple solutions on the pressure - flow curve , which can lead to spontaneous oscillations . another network that can exhibit complex behavior is microvascular blood flow . nobel prize winner august krogh noted the heterogeneity of blood flow in the webbed feet of frogs in the early 1920s . in the _ anatomy and physiology of capillaries _ he wrote : _ in single capillaries the flow may become retarded or accelerated from no visible cause ; in capillary anastomoses the direction of flow may change from time to time . _ numerous researchers have confirmed these observations over the years . the heterogeneous distribution of red blood cells in microvascular blood flow is often interpreted as evidence of biological control . if the flow in a branch increases , it is assumed that the diameter of the branch responds in order to auto - regulate the flow . vasomotion has often been assumed to be the cause of oscillations in the micro - circulation .
while the importance of vasomotion cannot be denied , there is significant evidence that fluctuations in cell distributions in microvascular networks can be due to inherent instabilities . there are two fundamental phenomena in two - phase flow networks which differ from their single - phase counterparts and lead to complicated behavior . the first effect is that the effective viscosity or flow resistance in a single pipe is often a nonlinear function of the fraction of the different fluids in the pipe . the second effect is that in two - fluid systems , it is commonly observed that the phase fraction after a diverging node is different in the two downstream branches . in 2007 we ( jbg and njk ) proved that if the viscosity is a nonlinear function of fluid fraction then multiple stable equilibrium states may exist . further , we proved that both phase separation at a node and nonlinear viscosity can lead to the emergence of spontaneous oscillations . in recent experiments , we ( jbg and bds ) have demonstrated some of these predictions experimentally in simple networks involving two newtonian fluids of different viscosity . in one set of experiments we demonstrated bistability via nonlinear resistance and in the other bistability via phase separation . these experiments showed that multiple equilibria in networks are possible without fluids with complex rheology . while our experiments involve simple fluids in a controlled laboratory setting , it is expected that these results may be generalized and found in numerous natural and man - made systems . phase separation at a single node exists in numerous fluid systems and has been widely studied in different contexts . in microvascular blood flow , krogh introduced the term `` plasma skimming '' in order to explain the disproportionate distribution of red blood cells observed at single branch bifurcations _ in vivo _ . numerous authors have demonstrated plasma skimming _ in vitro _ and _ in vivo _ and developed simple empirical models to describe the effect . another widely studied example of phase distribution at a single node is gas - liquid two - phase flow , which has important technological applications in power and process industries . in many process applications phase maldistribution can have detrimental consequences for downstream equipment , while in some cases the phenomenon is exploited to build simple phase separators . extensive experimental work on gas - liquid flow has been conducted over the past 50 years . phase separation in liquid - liquid flows , of interest in applications for the process and petroleum industry , is less well studied , though several recent papers have emerged . phase maldistribution in two - phase flow has also been shown to affect network flows in refrigeration systems and solar power systems . while the behavior at a single node has been well studied experimentally in the applications noted above , systematic analysis of networks with two - phase flow has received less attention . the most widely studied network is the microvascular one , for which the first modeling effort for dynamics was conducted by kiani _ et al . _ , who in 1994 performed a direct simulation of 400 vessels and found oscillations in the flow . in 2000 carr and lecoin found oscillations in networks with fifteen vessels . they found evidence of hopf bifurcations and limit cycles , but were unable to determine which parameters controlled the dynamics .
in an attempt to understand the parameters that lead to spontaneous oscillations in microvascular flows , geddes _performed a complete analysis of the flow - driven 2-node network ( one inlet , a loop , and one outlet ) in 2007 .while this network can exhibit oscillations in theory , they do not exist for realistic physical parameters .several other groups have since studied the problem of oscillations in microvascular networks , and a coherent picture is beginning to emerge . in the context of microvascular flowwe now know that networks with 2 vessels can exhibit spontaneous oscillations for unrealistic physical parameters while networks with 15 vessels can oscillate for realistic parameters .it is unknown at what level of network complexity oscillations can emerge and what parameters govern their existence .while the 2-node network has been fully characterized theoretically , full descriptions of more complicated networks becomes difficult . while we have studied the equilibrium properties of the 3-node network ( two inlets , a loop , and one outlet ) theoretically and experimentally in prior work , we had no systematic method to understand the stability other than through direct simulation . while we did not predict the existence of oscillations for the parameters relevant to our experiments , with no systematic method to analyze the stability and the large number of parameters it is impossible to rule out the emergence of spontaneous oscillations . in this paperwe develop a methodology for finding and tracking hopf bifurcations through continuation .this development is critical due to the large parameter space of the problem .we find that our analytical methods are in perfect agreement with direct numerical simulations , validating the methodology .using our methods we develop phase diagrams that show a rich set of dynamics including multiple frequency oscillations and co - existing limit cycles .the details of these predictions depend sensitively on the constitutive laws for the fluids in the network and the phase separation at a single diverging node .however , our methodology is general and may be applied to any two - phase flow network system .the physical setup is shown in figure [ fig : schematic ] .the network has two flow controlled inlets , each of which contains a fluid comprised of two separate phases , and .the two phases are two fluids which have different viscosities and remain distinct at least up to the inlet of the network . without loss of generality ,we assume that is the more viscous fluid .locally at a point along the tube we define the local volume fraction as , where is the volumetric flow rate of each phase . in an experiment the volume fraction in the two inlets would be set upstream by flow controlled pumps attached to reservoirs of fluids and . in the case ofblood flow the phase is plasma , the phase is red blood cells and the volume fraction is the hematocrit . while blood is not really comprised of two continuous fluid phases , such a model is commonly used in numerical simulations or laboratory experiments . and at controlled flow rates and . 
while the flow in vessels and is always from left to right in this figure , the flow in vessel can be up or down depending on the state of the network , with representing downward flow and representing upward flow . ] the basic network model based on fundamental conservation principles will be developed in the next section . however , to close the model we require two constitutive laws which depend on the details of the fluid system and the network geometry : i ) the effective viscosity as a function of volume fraction and ii ) the phase separation rule for a single node . the details encoded in these two constitutive laws play a dramatic role in the eventual behavior of the network . we assume laminar flow in cylindrical tubes where the hydraulic resistance is proportional to the viscosity of the fluid mixture . since we have a two - phase flow we can compute an effective viscosity , which is a function not only of the two fluids involved but also of their geometrical arrangement in the tube . the effective viscosity can be expressed in terms of the viscosity of the less viscous phase and a relative viscosity . simple newtonian fluids approximately follow a nonlinear arrhenius mixing law when they are well mixed , with the relative viscosity determined by the volume fraction and the viscosity contrast between the individual phases . different viscosity laws exist for physical configurations other than complete mixing . for newtonian fluids that remain stratified in a circular tube as separate phases , the relative viscosity follows a relationship which can be readily computed , though no simple analytical form exists . another common physical configuration is a core annular flow , where the viscous fluid assumes a cylindrical core which is lubricated by an annulus of less viscous fluid in a cylindrical tube . for the example of microvascular blood flow the rheology is more complicated ; however , pries _ et al . _ compiled a database of viscosity measurements in tubes with a range of diameters and hematocrits . while the exact forms of the above viscosity laws all differ , the important fact is that they are all nonlinear functions of the volume fraction , which is a key feature for networks to exhibit multiple equilibrium states and spontaneous oscillations . throughout this work we will assume for convenience that the effective viscosity is determined by equation [ eq : viscosity ] . the phase separation rule for each node is a complex function which depends sensitively on the fluid system , the node geometry , and the inlet flow rate . the phase separation rule relates the downstream volume fractions in two daughter branches to the inlet flow state . in this work we explore the consequences of two different separation functions , which are valid for 1 ) microvascular blood flow and 2 ) stratified laminar flow of two newtonian fluids . for microvascular blood flow , numerous authors have demonstrated this separation of red blood cells from plasma ( _ i.e. _ , plasma skimming ) and developed simple empirical models to describe the effect . these empirical relations become part of the network model . in our previous work on networks with stratified laminar flow where the fluids remain as distinct phases , we measured the separation function for this system , demonstrated that we could compute the functions via 3d navier - stokes simulations , and developed an approximate one - parameter model for use in network modeling .
in that work gravity was normal to the plane of the network flow . it has been shown that , unlike the effective viscosity model , the exact form of the separation function has a dramatic effect on the types of equilibrium and dynamic behavior that may be observed . the two sample empirical separation functions we use in this work are shown in figure [ fig : visc_plasma ] . the normalized volume fraction in vessels and of the node is plotted as a function of the flow in vessel normalized by the inlet flow . the dotted line denotes the case with no phase separation . a ) empirical function for microvascular blood flow as defined by equation [ eq : plasma_skim ] and b ) empirical function for stratified laminar flow as defined by equation [ eq : baby_karst ] . ] it is important to realize that phase separation at a single network node is a common phenomenon in many two - fluid systems and that many other types of behavior exist , as noted in section i. in network modeling , it is common to use simple empirical functions with a single fit parameter which can be tuned to approximately model experimental data . it is recognized that such simple functions are limited in their accuracy , but they are useful in allowing for easy incorporation into analysis and providing some insight into expected experimental behavior . for example , a common fit function for microvascular flows is given by equation [ eq : plasma_skim ] , where the argument is the normalized flow in the branch shown in the schematic of figure [ fig : visc_plasma ] . the fit parameter selected for figure [ fig : visc_plasma]a is a typical value used in prior studies . for stratified flow a simple one - parameter fit function , equation [ eq : baby_karst ] , represents the basic form of the experimental data , and the value selected for figure [ fig : visc_plasma]b is observed in typical experimental data . in both cases the fit parameters depend on many of the other physical parameters in the system . we use a generic function to represent the phase separation constitutive law , whatever the physical system . for any such function , the volume fraction in the other vessel is connected to it through conservation of each phase . a few remarks are worth making about the phase separation functions shown in figure [ fig : visc_plasma ] . note that the phase separation function for microvascular blood flow is symmetric under exchange of the two downstream vessels , _ i.e. _ , it does not matter how we arrange them . the same is not true for the phase separation function of stratified flow ; different arrangements of the downstream vessels result in different phase separation . in both cases we note that the volume fraction entering one vessel can be exactly zero in certain flow regimes , but this condition does not hold in any general sense . basic conservation requirements must hold for all phase separation functions .
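the single - parameter fits described above can be encoded compactly . the sketch below uses a klitzman - johnson - type power law as a stand - in for equation [ eq : plasma_skim ] and obtains the second daughter branch from conservation ; the functional form and the sample exponent are assumptions for illustration , since the precise expressions and fitted parameter values of figure [ fig : visc_plasma ] are not reproduced here .

```python
import numpy as np

def phi_daughter_b(phi_in, qb_frac, m=2.0):
    """Volume fraction entering daughter branch b of a diverging node.

    phi_in  : volume fraction of the dispersed (more viscous) phase at the node
    qb_frac : fraction of the total nodal flow that enters branch b
    m       : single fit parameter; the power-law (Klitzman-Johnson-like) form
              and the default value are assumptions, not the fit of the paper
    No attempt is made to cap the result at unity for extreme inputs."""
    q = np.clip(qb_frac, 1e-9, 1.0 - 1e-9)
    frac_b = q**m / (q**m + (1.0 - q)**m)   # share of dispersed-phase flux drawn into b
    return phi_in * frac_b / q

def phi_daughter_a(phi_in, qb_frac, m=2.0):
    """Other daughter branch, from conservation: phi_a*(1-qb) + phi_b*qb = phi_in."""
    qb = np.clip(qb_frac, 1e-9, 1.0 - 1e-9)
    return (phi_in - phi_daughter_b(phi_in, qb, m) * qb) / (1.0 - qb)

# an evenly split node passes the inlet fraction through unchanged, while an
# uneven split concentrates the dispersed phase in the faster branch
print(phi_daughter_b(0.3, 0.5), phi_daughter_a(0.3, 0.5))   # -> 0.3 0.3
print(phi_daughter_b(0.3, 0.7), phi_daughter_a(0.3, 0.7))
```

note that this power - law form is symmetric under exchange of the two daughter branches , as required of the microvascular fit ; the stratified - flow fit of equation [ eq : baby_karst ] is not symmetric and would need its own expression .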
in our network modelwe assume that the function is known by some means , either experiments or computational fluid dynamics .while we confine our results to equations [ eq : plasma_skim ] and [ eq : baby_karst ] , the methods we develop are general and can be applied to any -smooth constitutive law for the physical system of interest .we assume the volume fraction in vessel is governed by the first order wave equation where is length of vessel .the propagation velocity in vessel is proportional to the volumetric flow rate in the vessel , where is the diameter of the vessel . at each node in the networkthe inlet flow rates equal the outlet flow rates , namely and where positive is assumed to go from inlet to .the flow rates may vary in time , but each is constant throughout the vessel . in this workwe consider steady boundary conditions , thus , , , and are constants .to solve equation [ eq : wave_dimensions ] we need boundary conditions at the entrance of the three vessels .the boundary conditions are supplied by the conservation of each constituent at the node , namely , the third required boundary condition depends upon the direction of . when the flow is such that is positive ,the boundary condition for vessel is given by ; see figure [ fig : visc_plasma]b .when the flow is such that is negative , the boundary condition in vessel is . once the direction is established and the inlet volume fraction to vessel is determined by the phase separation function , equations [ eq : bc1 ] and [ eq : bc2 ] provide the inlet volume fractions to vessels and .the pressure drop across any vessel is given as .in laminar flow , the hydraulic resistance of branch , , is a function of the spatially averaged viscosity , through poiseuille s law , kirchoff s potential law applied around the network loop , _ i.e. _ , , provides an equation for the flow in , the above formulation is a closed problem for the 1d wave propagation of volume fraction in the connected vessels of our network .it is worth noting that the model is symmetric under the exchange , , , and ( vessel ) ( vessel ) .a dimensionless version of the governing equations can be derived by scaling space and time according to so that each vessel s spatial dimension is normalized to its length , time is scaled by the ratio of the total volumetric flow rate in the network to the total volume of the network , and flow rates are normalized to the total flow .the dimensionless governing equation for vessel is the boundary conditions become , with the phase separation function at the appropriate node providing the final third boundary condition , in dimensionless terms , the flow equation becomes where is the nominal resistance in vessel , and is the average relative viscosity in vessel as defined by equation [ eq : viscosity ] .there are 8 dimensionless parameters that enter the problem .the network geometry introduces four parameters .two of these are defined by the ratio of the nominal resistances , and .the other two are defined by the ratio of the volume of the vessels , and .in addition , there are three inlet parameters we are free to control , , , and .the fluid system chosen determines the viscosity function , and the contrast between the two phases , , supplies another parameter . 
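A minimal closure of the steady problem can be sketched as follows, under assumptions that are ours rather than the paper's: a particular arrangement of the two inlets, the cross vessel and the two outlet vessels, identical inlet fluids, and the illustrative constitutive stand-ins from the previous sketch. At steady state the loop law reduces to a single nonlinear equation for the cross-vessel flow, and scanning for sign changes exposes any multiple roots, i.e. the multiple equilibria analyzed below.

```python
import numpy as np
from scipy.optimize import brentq

# Assumed topology: inlet 1 (flow q1) feeds node 1 and vessel a, inlet 2
# (flow 1 - q1) feeds node 2 and vessel b, vessel c joins node 1 to node 2
# (q_c > 0 means flow from node 1 to node 2); both outlets share one pressure.
C, PHI_IN = 30.0, 0.4                 # viscosity contrast, identical inlet fluids
R_NOM = (1.0, 1.0, 0.5)               # nominal resistances of vessels a, b, c (illustrative)

mu_r = lambda phi: C ** phi
Fsep = lambda s, m=4.0: s**m / (s**m + (1.0 - s)**m)   # placeholder separation law

def vessel_fractions(q1, qc):
    """Steady-state volume fractions in vessels a, b, c for a trial q_c."""
    qa, qb = q1 - qc, (1.0 - q1) + qc
    if qc >= 0.0:                                   # node 1 splits, node 2 merges
        F = Fsep(min(qc / q1, 1.0))
        phi_c = PHI_IN * q1 * F / max(qc, 1e-12)
        phi_a = PHI_IN * q1 * (1.0 - F) / qa
        phi_b = ((1.0 - q1) * PHI_IN + qc * phi_c) / qb
    else:                                           # node 2 splits, node 1 merges
        F = Fsep(min(-qc / (1.0 - q1), 1.0))
        phi_c = PHI_IN * (1.0 - q1) * F / max(-qc, 1e-12)
        phi_b = PHI_IN * (1.0 - q1) * (1.0 - F) / qb
        phi_a = (q1 * PHI_IN - qc * phi_c) / qa
    return phi_a, phi_b, phi_c

def loop_residual(qc, q1):
    """Kirchhoff loop law R_a*q_a - R_c*q_c - R_b*q_b = 0 at steady state."""
    qa, qb = q1 - qc, (1.0 - q1) + qc
    pa, pb, pc = vessel_fractions(q1, qc)
    return (R_NOM[0] * mu_r(pa) * qa - R_NOM[2] * mu_r(pc) * qc
            - R_NOM[1] * mu_r(pb) * qb)

def equilibria(q1, n=400):
    """All equilibrium values of q_c: scan for sign changes, then refine."""
    grid = np.linspace(-(1.0 - q1) + 1e-6, q1 - 1e-6, n)
    vals = [loop_residual(qc, q1) for qc in grid]
    return [brentq(loop_residual, grid[i], grid[i + 1], args=(q1,))
            for i in range(n - 1) if vals[i] * vals[i + 1] < 0]

if __name__ == "__main__":
    for q1 in (0.35, 0.50, 0.65):
        print(q1, equilibria(q1))     # more than one root => multiple equilibria
```

Sweeping the inlet split and plotting the roots reproduces, in outline, the equilibrium curves discussed below.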
finally , the phase separation function is critical to the behavior , though the function is set by the physical system and is not something we can easily control in a given physical experiment .the parameter space is quite large , thus direct numerical solution of the problem is not practical for spanning parameter space and motivates us to find a reliable method for tracking regions of stability and instability . in what follows we use the dimensionless formulation , and for convenience we drop the `` hat '' notation .at equilibrium , the volume fraction in branch is constant throughout the branch and equal to the entrance volume fraction , .equations [ eq : bc][eqn : floweqn ] are sufficient to solve for the equilibrium flows and volume fractions .the viscosity functions and in turn hydraulic resistances can each be written as functions of the equilibrium flow rate .equation [ eqn : floweqn ] therefore defines a nonlinear equation in , and multiple solutions are possible .the parameter space is still large and consists of , , , , , and .we have explored the equilibrium problem for the 3-node network in prior publications .et al . _ theoretically studied the problem in the context of microvascular blood flow and demonstrated that multiple equilibrium states were indeed possible .the observation that multiple equilibria were possible in regimes where no phase separation takes place motivated us to design a table - top experiment using water and sucrose solution . in this work we derived a simple condition for the onset of multiple equilibria , and confirmed the predictions in the laboratory .more recently , karst _ et al . _ designed an experiment using fluids undergoing laminar stratified flow to attempt to mimic the phase separation effect in microvascular blood flow . in that work we predicted and observed multiple equilibria and derived a simple criteria for its onset . in this current paper , we focus on the problem of stability and dynamics .however , for completeness we synthesize prior results on equilibria here using our current models and terminology .we refer the interested reader to the above publications for more details . in figure[ fig : mveq]a we show three sample equilibrium curves in the plane .we have chosen , , , and the three curves correspond to viscosity contrast and . herewe are using equation [ eq : plasma_skim ] for the phase separation function from microvascular blood flow .the parameters selected here are relevant later in our analysis of the dynamics .there exist multiple equilibria if for any given value of there exist multiple values of . for the chosen parameter values, there is a single equilibrium for a viscosity contrast of , but multiple equilibria for contrasts of and .the window of multiple equilibria grows with increasing viscosity contrast . .when the viscosity contrast is set to 10 , we observe a small window of multiple equilibria about , and as the viscosity contrast is increased to 30 , this window widens .( b ) phase diagram in the parameter space . in the gray region ( i ), there exists a single equilibrium . in the orange region ( ii ) , there exist multiple equilibria . regions ( i ) and ( ii ) are delineated by the saddle - node bifurcation curve ( black ) which emerges from ( 0.5,3.4 ) ., width=251 ] .when the viscosity contrast is set to 10 , we observe a small window of multiple equilibria about , and as the viscosity contrast is increased to 30 , this window widens .( b ) phase diagram in the parameter space . 
in the gray region ( i ), there exists a single equilibrium . in the orange region ( ii ) , there exist multiple equilibria . regions ( i ) and ( ii ) are delineated by the saddle - node bifurcation curve ( black ) which emerges from ( 0.5,3.4 ) . ,width=245 ] while the width of the multiple equilibria window involves an in - depth calculation , the onset point is relatively straight - forward to calculate .notice from figure [ fig : mveq]a that multiple equilibria are born in a saddle - node bifurcation when the equilibrium curve folds over at .a condition for onset can therefore be obtained by setting , which yields , recall that , where depends only upon the geometry ( and ) of vessel a while depends upon the phase distribution within the network .thus to evaluate the hydraulic resistances we must know the network geometry and the phase distribution inside the network when .if the inlets are not equal fluids we must be careful to consider the above criteria as and .we can simplify the multiple equilibrium criteria for the two cases considered in this paper .first , we limit our study to cases where we drive the network with identical inlet fluids , , and we do not need to consider the direction with which we approach .second , for the two - phase separation functions examined in this paper , the volume fraction in vessel is zero when ; in both empirical phase separation functions given by equations [ eq : plasma_skim ] and [ eq : baby_karst ] . in this particular case ,the condition for multiple equilibrium becomes since at , and , the criteria can be further reduced to where is the relative viscosity of the inlet fluid . for the network geometry used in figure [ fig : mveq]a , ,thus the criteria for multiple equilibria approximately reduces to , or . in figure[ fig : mveq]b we show the region of multiple equilibria in the plane for the parameters previously discussed . notice that the onset point agrees with the above calculation and occurs at . as the viscosity contrast is increased the width of the window increases .changing the network geometry and inlet fluids changes the details of the multiple equilibria window but not its existence . in the rest of this paper , we consider the stability of the equilibrium solutions and the resulting nonlinear dynamics .we assume that the network is initially in equilibrium , _ i.e. _ , , for all and .we introduce perturbations beginning at time on the flow rates and volume fraction profiles so that substituting equations [ eqn : perturbphi ] and [ eqn : perturbq ] into the appropriate governing equations and retaining only the linear terms results in a first order wave equation describing the propagation of the volume fraction perturbation in each branch , where is the dimensionless steady state transit time in branch . an expression for the flow perturbation can be computed by expanding the flow equation about the equilibrium , relative perturbations to the resistance in each branch is determined by finally , perturbations to the boundary conditions are required . 
without loss of generalitywe assume that the flow in is from inlet 1 to inlet 2 and the perturbation to the volume fraction entering is then where is the derivative of the plasma skimming function .the perturbations to the volume fraction entering and are given by mass fraction .for vessel we have and for vessel we have equations [ eq : wave ] [ eq : vesselb ] constitute the linearized equations .we assume traveling wave solutions of the form which automatically satisfy equation [ eq : wave ] .substituting into equation [ eq : resist ] and integrating gives where further substitution into equation [ eq : flow ] results in substituting into equation [ eq : vesselc ] results in finally , substitution into equations [ eq : vessela ] and [ eq : vesselb ] gives and equations ( [ eq : linear1])-([eq : linear4 ] ) constitute 4 linear equations in the 4 unknowns and .non - trivial solutions exist if and only if the following characteristic equation has roots , where the coefficients are given by and we have dropped the * for convenience .the characteristic equation has three delay times , but is composed of linear combinations of four transcendental functions .two of these arise from the propagation delay in vessel and vessel ( coefficients a " and c " ) .a third term arises due to perturbations in the flow entering vessel ( coefficient b " ) .the last contribution arises due to perturbations to the volume fraction in vessel which propagate into and through vessel ( coefficient d " ) .it is worth noting that in the absence of nonlinear viscosity , all of the coefficients are zero and the equilibrium is stable .furthermore , no plasma skimming would imply and so that only the b " coefficient would remain .it is straightforward to show that only real roots exist and thus oscillatory dynamics are ruled out .nonlinear viscosity and plasma skimming are therefore necessary for the emergence of oscillatory behavior .a root of the characteristic equation satisfies the relations we will see that these relations are useful for identifying hopf bifurcations that can be used as starting points for numerical continuation through the large parameter space of the system .the network model includes 8 dimensionless parameters as well as the constitutive laws for viscosity and phase separation .it is difficult to make general predictions without selecting a set of constitutive laws since these relations critically determine the system behavior . since general statements about any arbitrary system are difficult to make , we present two physically realistic systems to demonstrate the methodology for analyzing stability . in example 1we take the well - studied problem of microvascular blood flow , and in example 2 we take stratified laminar flow of two newtonian fluids , a system for which we have conducted prior equilibrium experiments . for our first example, we use the phase separation model for microvascular flow , equation [ eq : plasma_skim ] with .we use the simple arrhenius law for viscosity in the vessels after the initial splitting at the inlets , equation [ eq : viscosity ] .the arrhenius law has the basic functional form as the empirical laws for blood viscosity . for the network we use parameters ; ; unless otherwise noted . 
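Locating roots of a transcendental characteristic equation of this type is a routine numerical task. The sketch below is purely structural: the coefficients and delays are arbitrary placeholders (the paper's coefficients depend on the equilibrium state and on the constitutive laws), and the compound delay used for the last term is our reading of the description above. A root s = sigma + i*omega is found by driving the real and imaginary parts to zero simultaneously, which is exactly the pair of relations just quoted.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative coefficients and delays; the paper's values are state-dependent.
A, B, Ccoef, D = 0.8, -1.5, 0.6, 0.9
TAU_A, TAU_B, TAU_C = 1.0, 1.3, 0.7

def char_fn(s):
    """Generic delay characteristic function: a constant plus four exponential
    terms, one of which carries the compound delay tau_b + tau_c."""
    return (1.0 + A * np.exp(-s * TAU_A) + B * np.exp(-s * TAU_B)
            + Ccoef * np.exp(-s * TAU_C) + D * np.exp(-s * (TAU_B + TAU_C)))

def real_imag(x):
    f = char_fn(complex(x[0], x[1]))
    return [f.real, f.imag]            # both must vanish at an eigenvalue

def find_root(sigma0, omega0):
    x, _, ier, _ = fsolve(real_imag, [sigma0, omega0], full_output=True)
    return complex(*x) if ier == 1 else None

if __name__ == "__main__":
    # seed a coarse grid of guesses; a root crossing sigma = 0 with omega != 0
    # marks a Hopf bifurcation, and roots with sigma > 0 signal instability
    found = {}
    for s0 in (-1.0, -0.5, 0.0, 0.5, 1.0):
        for w0 in (0.5, 1.5, 3.0, 5.0, 8.0):
            r = find_root(s0, w0)
            if r is not None and r.imag > 0:
                found[(round(r.real, 4), round(r.imag, 4))] = r
    for r in sorted(found.values(), key=lambda z: -z.real):
        print(f"sigma = {r.real:+.4f}, omega = {r.imag:.4f}")
```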
in dimensionlessterms , , , and .( blue ) and equation [ eqn : eigim ] ( orange ) with and a viscosity contrast of in the microvascular blood flow model .each intersection ( black dot ) indicates a hopf bifurcation of frequency occurs at the pair associated with the continuation index ., width=377 ] the traditional approach to detect hopf bifurcations is to monitor the test function defined by the product of the imaginary components of the eigenvalues along the continuation of an equilibrium . in the systems with transcendental characteristic equations in which the eigenvalues can not be directly computed ,a more powerful tool can be applied by monitoring the test function , where is the jacobian of the equilibrium relation and denotes the bialternate product . here , we employ a more specialized approach in order to simultaneously detect a hopf bifurcation and determine its frequency . at equilibrium ,the hydraulic resistances in each branch are functions of the equilibrium flow rate .we can therefore rewrite equation [ eqn : flow ] as . to track an equilibrium through parameter space , we parameterize by and perform numerical continuation on the equilibrium relation forming a parametric equilibrium curve in the plane .hopf bifurcations can be identified along the equilibrium curve by monitoring the relation defined by substituting in equations [ eqn : eigre ] and [ eqn : eigim ] .since the values of and the steady state transit times are fixed at each pair along the continuation , we can define and to be the left sides of equations [ eqn : eigre ] and [ eqn : eigim ] with , respectively , without loss of generality .then any intersection of the zero contours of and indicates a hopf bifurcation of frequency occurs at the pair associated with index .we see an implementation of this methodology with a viscosity contrast of 50 in figure [ fig : hopfcontours ] .this figure is horizontally symmetric because the underlying network is geometrically symmetric .we observe a collection of low frequency hopf bifurcations that occur near the saddle - node bifurcations located at indices 886 and 1515 in figure [ fig : hopfcontours ] .we also observe pairs of higher frequency hopf bifurcations that occur away from the saddle - node bifurcations . as the viscosity contrastis increased , additional bands of instability appear , and these bands grow to encompass the entirety of the upper and lower branches of the equilibrium curve. we can confirm that figure [ fig : hopfcontours ] accurately predicts the presence of sustained oscillations through direct numerical simulation . as an example , we choose the equilibrium pair which is located in the left - most band of instability in figure [ fig : hopfcontours ] .the eigenvalue - based prediction is shown in figure [ fig : eig]a . herewe plot the zero contours of equations [ eqn : eigre ] and [ eqn : eigim ] in the plane so that an intersection of the contours at some pair indicates that is a solution to the equation [ eqn : char ] .note the dominant eigenvalue has positive real part and imaginery part .we then perform a direct numerical simulation of equation [ eqn : pde ] ( with appropriate boundary conditions ) at the same parameters .we initialize the simulation to the equilibrium state and provide a small numerical perturbation .a limit cycle grows from the unstable equilibrium solution as seen in figure [ fig : eig]b . 
when the system reaches a periodic steady state the limit cycle has period , which corresponds to a dimensionless angular frequency , in good agreement with our linear prediction .if we check the frequency in the simulation earlier when the amplitude is infinitesimal the frequency matches the linear analysis exactly .we also confirm that our predicted growth rate matches the simulation .( blue ) and equation [ eqn : eigim ] ( orange ) in the microvascular blood flow model .each intersection ( black dot ) indicates a solution to the characteristic equation [ eqn : char ] .note the sole eigenvalue with positive real part is .( b ) limit cycle about equilibrium computed from direct simulation .the period of the oscillation agrees with the analysis.,width=245 ] ( blue ) and equation [ eqn : eigim ] ( orange ) in the microvascular blood flow model .each intersection ( black dot ) indicates a solution to the characteristic equation [ eqn : char ] .note the sole eigenvalue with positive real part is .( b ) limit cycle about equilibrium computed from direct simulation .the period of the oscillation agrees with the analysis.,width=258 ] we can begin to form intuition about the presence and location of hopf bifurcations by varying the viscosity contrast for a fixed geometry and tracking the associated bands of instability .an example is shown in figure [ fig : hopfhighlight ] . herewe plot the equilibrium curve in the plane at three values of the viscosity contrast .this is the same figure and parameters as figure [ fig : mveq]a with the stability information superimposed .these curves are experimentally relevant as one can build a fixed network and then adjust the relative flow of the two inlets to move left and right along the -axis .experimentally we can adjust the inlet fluids to adjust to viscosity , here the three curves represent the equilibrium solution for viscosity contrasts of 2 , 10 , and 30 .when the viscosity contrast is 2 , the equilibrium curves are single - valued and there are no hopf bifurcations . at a viscosity contrast of 10 , the equilibrium curve becomes multi - valued over a small range around .for this range of there are two possible states , one with positive and negative .we also see a region of instability emerges right at the location where the curves fold over .this hopf bifurcation is at low frequency and in numerical simulations we find that there is no stable limit cycle .the amplitude of oscillation grows until the system flips to the other stable state on the equilibrium curve .as we increase the viscosity contrast to 30 the region of multiple equilibrium grows and a new region of instability emerges along the equilibrium curve .this region is a high frequency oscillation which results in a stable limit cycle as seen in figure [ fig : eig]b . for the viscosity contrast of 30 ,the picture is that as we experimentally move continuously from to we would start by observing a single , stable , equilibrium flow state with negative .as we increase we would see a limit cycle oscillation emerge around which would persist until .since this limit cycle exists outside the region of bistability , there is no other state for the system to move toward .after is increased beyond 0.37 the limit cycle disappears and the system returns to a single stable equilibrium state with negative . at large amplitude oscillation emerges and kicks the system to the other stable equilibrium state with positive . 
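The direct numerical simulations referred to above can be reproduced in outline with a first-order upwind discretization of the volume-fraction advection equation in each vessel, re-solving the loop law for the instantaneous flows at every time step. Everything in the sketch below (the topology, the constitutive stand-ins and the parameter values) is illustrative and carried over from the earlier sketches; it is not the code used for the paper's figures.

```python
import numpy as np
from scipy.optimize import brentq

C, PHI_IN, Q1 = 50.0, 0.4, 0.55
R_NOM = (1.0, 1.0, 0.5)                 # nominal resistances of vessels a, b, c
VOL = (1.0, 1.0, 0.3)                   # dimensionless vessel volumes
N, DT, T_END = 200, 1.0e-3, 40.0

mu_r = lambda phi: C ** phi
Fsep = lambda s, m=4.0: s**m / (s**m + (1.0 - s)**m)

def loop_residual(qc, R):
    Ra, Rb, Rc = R
    return Ra * (Q1 - qc) - Rc * qc - Rb * ((1.0 - Q1) + qc)

def advect(profile, velocity, inlet_value):
    """First-order upwind step; positive velocity runs from index 0 to -1."""
    cfl = abs(velocity) * DT * N
    new = profile.copy()
    if velocity >= 0.0:
        new[1:] -= cfl * (profile[1:] - profile[:-1]); new[0] = inlet_value
    else:
        new[:-1] -= cfl * (profile[:-1] - profile[1:]); new[-1] = inlet_value
    return new

phi = [np.full(N, PHI_IN) for _ in range(3)]     # vessels a, b, c (c indexed node 1 -> node 2)
qc_history = []
for step in range(int(T_END / DT)):
    R = [R_NOM[k] * np.mean(mu_r(phi[k])) for k in range(3)]
    qc = brentq(loop_residual, -(1.0 - Q1) + 1e-9, Q1 - 1e-9, args=(R,))
    qa, qb = Q1 - qc, (1.0 - Q1) + qc
    if qc >= 0.0:                                # node 1 splits, node 2 merges
        F = Fsep(qc / Q1)
        phi_c_in = PHI_IN * Q1 * F / max(qc, 1e-12)
        phi_a_in = PHI_IN * Q1 * (1.0 - F) / qa
        phi_b_in = ((1.0 - Q1) * PHI_IN + qc * phi[2][-1]) / qb
    else:                                        # node 2 splits, node 1 merges
        F = Fsep(-qc / (1.0 - Q1))
        phi_c_in = PHI_IN * (1.0 - Q1) * F / max(-qc, 1e-12)
        phi_b_in = PHI_IN * (1.0 - Q1) * (1.0 - F) / qb
        phi_a_in = (Q1 * PHI_IN - qc * phi[2][0]) / qa
    phi[0] = advect(phi[0], qa / VOL[0], phi_a_in)
    phi[1] = advect(phi[1], qb / VOL[1], phi_b_in)
    phi[2] = advect(phi[2], qc / VOL[2], phi_c_in)
    qc_history.append(qc)

print("q_c near the end of the run:",
      np.round(qc_history[-5000::1000], 4))      # non-constant when the fixed point is unstable
```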
as we continue to increase oscillations would emerge again at , this time with positive . finally at would return to a single , stable equilibrium with positive . in this examplethe curves are symmetric about because the network geometry is symmetric .the region of instability changes as we increase the viscosity contrast . generally ,the window with multiple equilibrium states and the regions of instability increase with viscosity contrast .this behavior is demonstrated in the phase diagram of figure [ fig : bifurc ] which is an expansion of the phase diagram shown previously in figure [ fig : mveq]b .here we have identified both saddle - node and hopf bifurcations in figure [ fig : hopfcontours ] and used the relations to track the saddle - node and hopf bifurcations , respectively , through parameter space .note that this phase diagram is symmetric about due to the symmetry of the network geometry .parameter space for the microvascular blood flow model . in the gray region ( i ) , the system exhibits a unique equilibrium state . in the orange region ( ii ) , two stable equilibrium states exist .the yellow region ( iii ) represents parameters which support one unstable oscillation and one stable state .the dark blue region ( iv ) represents configurations in which have a single oscillatory state .networks in the light blue region ( v ) support two oscillatory states .the regions are separated by curves marking saddle - node bifurcations ( black curves ) , the lowest frequency hopf bifurcation ( blue curve ) , and higher frequency hopf bifurcations ( red / gray curves ) . at high viscositycontrasts the instability is comprised of multiple frequencies.,width=453 ] the hopf and saddle - node continuation curves delineate several regions of behavior . if we start with a low viscosity contrast , _i.e. _ , less than , we have single valued equilibrium curve for any .as we increase the viscosity contrast , multiple equilibrium behavior emerges from ( point a ) .as soon as multiple equilibrium exists , a small window of instability emerges right at the point that the equilibrium curves fold over .this is a narrow region of a low frequency , large amplitude oscillation which will generally kick the system to the stable part of the multiple equilibrium curve .recall the behavior for a viscosity contrast of 10 from figure [ fig : hopfhighlight ] .as we increase the viscosity contrast to , a small region of high frequency instability emerges at and ( point b ) .this first region of high frequency instability represent the emergence of a single frequency limit cycle .the emergence of this instability occurs outside the multiple equilibrium region , thus the system must oscillate around the equilibrium point .this behavior was seen at a viscosity contrast of 30 in figure [ fig : hopfhighlight ] . as we increase the viscosity contrast to , the instability curve crosses into the region of multiple equilibria ( point c ) . in this region , we find that the system may tend to the oscillatory solution with positive ( negative ) or the stable solution with negative ( positive ) , depending on the initial condition . as the viscosity contrast is increased to , the hopf bifurcation curves associated with the positive and negative cross at . 
at this pointwe have the co - existing limit cycles at ( point d ) ; the system has two possible limit cycles one with positive and another with negative .we also see that at this viscosity value we have multiple frequency components meaning more complex dynamics are expected .finally , as we increase the viscosity contrast to , the region of instability reaches the boundary where and ( point e ) ; thus instability encompasses the entire range of inlet flow rates .all values of are expected to be unstable and if we are inside the multiple equilibrium region we expect to always find co - existing limit cycles .the presence of hopf bifurcations is strongly dependent on the viscosity contrast , and it not surprising that tuning this parameter also affects the amplitude and frequency of the associated oscillations .we saw in figure [ fig : bifurc ] that at high viscosity contrast we could have coexisting limit cycles and oscillations with multiple frequency components . in these casesour linear analysis can not tell us the complete dynamics so we use direct numerical simulation of equation [ eqn : pde ] to explore the final dynamics . in figure[ fig : increasedelta ] we plot three time series of the flow in the middle branch . as viscosity contrastis increased from 50 to 500 to 1500 , the amplitude of the limit cycle grows considerably .for example 2 , we use the phase separation model for stratified laminar flow , equation [ eq : baby_karst ] with .we again use the simple arrhenius law for viscosity in the vessels after the initial splitting at the inlet .this viscosity law could be realized in experiments if mixing was induced after the initial inlet split or if the tubes a , b , and c were long enough to allow molecular diffusion to mix the two phases .for the network we use similar parameters as the previous example ; ; unless otherwise noted . in dimensionless terms , , , and . note that in comparison to the microvascular example we have broken the symmetry of the diameters in vessels and . while we have no definite proof , we have been unable to detect any hopf bifurcations in a symmetric network subject to stratified laminar flow .we apply the same technique discussed in example 1 to detect hopf bifurcations .the zero contours of and with a viscosity contrast of 50 are shown in figure [ fig : hopfcontours_strat ] .as before , each intersection of these curves indicates a hopf bifurcation of frequency occurs at the pair associated with index .this figure is not symmetric because the underlying network is not .all the intersections are on the right side of the figure indicating that oscillations will occur when is greater than .[ fig : hopfcontours ] , here there is no differentiation between low frequency hopf bifurcations that emerge near the saddle - node bifurcation and higher hopf bifurcations that emerge away from it .all hopf bifurcations in figure [ fig : hopfcontours_strat ] emerge near the saddle - node bifurcation and grow towards the boundary . at a viscosity contrast of 50the boundary has been destabilized .this fact is experimentally relevant , as oscillations would be observed with inlet 2 in figure [ fig : schematic ] shut off , leading to a simplified experimental design . 
as the viscosity contrastis increased , additional bands of instability appear and grow from the saddle - node bifurcation towards the boundary .( blue ) and equation [ eqn : eigim ] ( orange ) with and a viscosity contrast of 50 in the stratified laminar flow model .each intersection ( black dot ) indicates a hopf bifurcation of frequency occurs at the pair associated with the continuation index.,width=377 ] in figure [ fig : hopfhighlight_strat ] we vary the viscosity contrast for a fixed geometry and track the associated bands of instability along the equilibrium curves . in this figurewe plot the equilibrium curve in the plane at four values of the viscosity contrast .these curves are experimentally relevant as one can build a fixed network , change the inlet fluids to adjust the viscosity contrast and adjust the relative flow of the two inlets to move left and right along the -axis .note that all the curves pass through the point when .this trivial point is determined by noting that when all the flow from inlet 1 goes through branch a and all the flow from inlet 2 goes through branch b. since there is no flow in c , the pressure drop across a and b must be the same .thus , the trivial point is given by for our example and , thus .grow towards the boundary ., width=340 ] the four curves shown in figure [ fig : hopfhighlight_strat ] represent the equilibrium solution for viscosity contrasts of 2 , 10 , 20 , and 30 . when the viscosity contrast is 2 , the equilibrium curves are single - valued and there are no hopf bifurcations . at a viscosity contrast of 10 ,the equilibrium curve becomes multi - valued over a small range around .for this range of there are two possible states , one with positive and negative .we also see a region of instability emerges at the locations where the curves fold over .the instability band with positive is much wider than the one with negative .as we increase the viscosity contrast to 20 the region of multiple equilibrium grows as does the band of instability .this behavior is different than the microvascular example in that the band of instability grows out of the point where the equilibrium curves fold over .we see that only the band with positive grows significantly in size .when we increase the viscosity contrast to 30 the instability band encompasses the whole branch of the positive equilibrium curve .we construct the phase diagram shown in figure [ fig : bifurc_strat ] to demonstrate the different possible states of the system . if we start with a low viscosity contrast , _i.e. 
_ , less than , we have a single equilibrium state for any .as we increase the viscosity contrast , multiple equilibrium behavior emerges from when the viscosity contrast is ( point a ) .as soon as multiple equilibrium exists , a hopf bifurcation ( denoted by the red curve ) emerges from the multiple equilibrium point .this instability occurs on the branch where ; recall the behavior for a viscosity contrast of 10 from figure [ fig : hopfhighlight_strat ] .as we increase the viscosity contrast to , this region instability grows and eventually leaves the multiple equilibria region ( point b ) .after this viscosity contrast is exceeded we may have branches of the equilibrium curve that are unstable via hopf bifurcation , and there is no other possible stable equilibrium state .as we increase the viscosity contrast to , the the region of instability reaches the boundary where ( point c ) ; thus instability encompasses the entire branch of the equilibrium curve where .parameter space for the stratified laminar flow model . in the gray region ( i ) , the system exhibits a single unique equilibrium state . in the orange region ( ii ) , two stable equilibrium states exist .the yellow region ( iii ) represents parameters which support an unstable oscillation and one stable equilibrium state .the dark blue region ( iv ) represents parameters with a single oscillatory state .the regions are separated by curves marking saddle - node bifurcations ( black curves ) and hopf bifurcations ( red / gray curves).,width=491 ] a very narrow band of instability also grows along the right edge of the multiple equilibrium boundary .this band corresponds to the instability region seen for negative at the fold in the equilibrium curve in figure [ fig : hopfhighlight_strat ] .this region is so narrow and only exists right the multiple equilibrium boundary that is likely of little practical interest and not observable . due to the broken symmetry for this parameter set ,we only see significant instability for cases where .thus unlike the example with microvascular blood flow , here we do not find co - existing limit cycles .as in example 1 , while the linear analysis can provide some insight into the types of behaviors we may see , we must resort to full numerical simulation in order to see the complete dynamics . in figure[ fig : timeseries_strat ] we show some sample dynamics for the stratified flow model with . with the viscosity contrast set to 30 ,we observe a relatively sinusoidal oscillation in . as the viscosity contrastis increased , figure [ fig : bifurc_strat ] shows that higher frequency bands of instability grow towards the boundary .with the viscosity contrast set to 50 , for instance , there are 3 distinct bands of instability that have crossed the boundary .these additional frequencies lead to richer temporal dynamics in the flow as seen in the middle pane of figure [ fig : timeseries_strat ] . as the viscosity contrastis increased to 500 , progressively higher frequency hopf bifurcations have crossed the boundary .this broader spectrum manifests as abrupt changes in the flow rate as seen in the bottom pane of figure [ fig : timeseries_strat ] . for different values of viscosity contrast , 30 , 50 , and 500 from top to bottom .in each phase plot the equilibrium solution is shown as the dot . 
As the viscosity contrast is increased, progressively higher-frequency bands of instability reach the boundary in figure [fig:bifurc_strat]. The presence of these higher frequencies results in richer temporal dynamics at higher viscosity contrasts. We have demonstrated a rich set of dynamics that emerges from simple fluid networks of practical and experimental relevance. We have presented a method for analyzing these fluid networks, which involve a large number of important free parameters. Through direct numerical simulation alone, the parameter space is too large to span in a systematic way. We find large ranges of parameter space in which equilibrium solutions for the phase and flow distribution within a network are unstable and spontaneous oscillations may emerge. We also find complex nonlinear dynamics for large viscosity contrasts. While we have presented our results in an experimentally relevant manner, the details of the constitutive laws are critical to the exact predictions of stability and are difficult to control experimentally. Thus, while our laws for viscosity and phase separation at a node are realistic for blood flow, the viscosity contrast of blood (the contrast between plasma and red-cell-rich fluid) is limited to approximately 10, so the contrast of 30 or 50 needed to see oscillations is probably still out of experimental range. However, through careful selection of the network parameters it may be possible to find examples that occur in realistic experimental systems. Further, the range of parameters where spontaneous oscillations exist for this network is much broader and more realistic than for the equivalent 2-node network, so adding an additional network branch might be sufficient to bring the dynamics into experimental reach. On the other hand, the predictions of the stratified network model are well within the range of what is possible experimentally. The stratified system has the advantage that viscosity is a more easily controlled parameter through the selection of the fluids, and the flow state is a natural consequence of buoyancy effects. Ongoing work is aimed at direct observation of these predictions. This work was supported in part by the National Science Foundation under contract no. DMS-1211640.
Nonlinear phenomena, including multiple equilibria and spontaneous oscillations, are common in fluid networks containing multiple phases or multiple constituent flows. In many systems such behavior might be attributed to the complicated geometry of the network, the complex rheology of the constituent fluids, or, in the case of microvascular blood flow, biological control. In this paper we investigate two examples of a simple three-node fluid network containing two miscible Newtonian fluids of differing viscosities, the first modeling microvascular blood flow and the second modeling stratified laminar flow. We use a combination of analytic and numerical techniques to identify and track saddle-node and Hopf bifurcations through the large parameter space. In both models we document sustained spontaneous oscillations and, for an experimentally relevant choice of parameters, investigate the sensitivity of these oscillations to changes in the viscosity contrast between the constituent fluids and in the inlet flow rates. For the case of stratified laminar flow, we detail a physically realizable set of network parameters that exhibits rich dynamics. The tools and results developed here are general and could be applied to other physical systems.
In molecular systems biology, biochemical reaction networks represent complex biological systems with a large number of components and many interactions among them. Examples of such networks are gene regulatory networks, signal transduction networks, and metabolic networks, which process cellular information, make cell-fate-determining decisions, and are inherently coupled together. These networks are subject to noise from various sources, i.e., intrinsic and extrinsic noise. The dynamical behaviors of these networks determine the physiology and phenotype of living organisms. It is therefore of great importance to deepen our understanding of the interplay between the noise, the structural properties of a biochemical reaction network, and the resulting dynamics. This knowledge enables us to understand the mechanisms by which signaling pathways work in natural organisms, to intervene in those pathways to modify the dynamics of genes and proteins, and to design new synthetic circuits with a desired functionality. Our current paper pursues a deeper understanding of the design principles of stochastic oscillations arising from biochemical reaction networks. Presently our work is limited to small networks, but it lays a good foundation for further generalization to networks of arbitrary size. The interplay between the structural properties of chemical reaction networks and the dynamics that the networks can potentially admit has been deeply studied in chemical reaction network theory (CRNT). By the deficiency-zero theorem of CRNT, any weakly reversible network with mass action kinetics and zero deficiency is proven to have exactly one positive and locally asymptotically stable steady state. The structural properties of a chemical reaction network, specified by the reversibility and the deficiency of the network and regardless of the kinetic details, set a limit on the qualitative dynamical properties that the particular network can potentially admit. However, because the deficiency theorems rest on strict conditions, their usefulness to systems and network biology is limited. Moreover, it has not been explored how the structure of chemical reaction networks influences the dynamical capacities of stochastic chemical reaction networks. Stochastic fluctuations are ubiquitous in the cellular environment and consequently influence the operation and functioning of gene regulatory and cell signaling networks. These fluctuations can confer a new dynamical capability on a network or destroy an existing one: e.g., noise-induced bistability, noise-induced stabilization, noise-induced synchronization, and noise-induced oscillation. Thus, it is of tremendous value to systematically map the landscape of stochastic dynamical behavior as a function of the structural properties of biochemical reaction networks. Among the many cellular dynamical phenomena, this paper concerns biochemical oscillation. Oscillations are prevalent in cellular biology partly because of living organisms' adaptation or readiness to periodic or abrupt environmental changes. A few examples of cellular biochemical oscillations are the cell cycle, circadian rhythms, NF-, p53, the developmental clock, neural rhythms, and hormones.
To quantitatively understand such periodic phenomena, many theoretical models have been put forth and extensively studied. Previous studies revealed a few important requirements for biochemical oscillators: i) negative feedback loops with sufficient time delay, ii) non-linearity of the kinetic laws, and iii) appropriate balancing of the synthesis and degradation rates of the chemical species. One of the above three restrictions is a structural condition on the underlying networks: negative feedback loops with time delay. A negative feedback loop is a cyclic pathway consisting of an odd number of inhibitory edges. The time delay can be realized in the networks in two different manners: either by an explicit time delay in a biochemical interaction, or by positive feedback loops in addition to the negative feedback loops. Our primary question in this paper is how noise relaxes or tightens the above three conditions for biochemical oscillations. In particular, we are keen to _ establish the new requirements _ for stochastic biochemical oscillators and to compare the stochastic oscillatory behaviors of networks with negative feedback loops alone against networks with coupled positive and negative feedback loops. In our previous work, we investigated the requirements for stochastic biochemical oscillators and compared networks with only negative feedback loops. Those networks consist of three biochemical species governed by mass action kinetics and are allowed to have only negative feedback loops. We proved that negative feedback loops are required for stochastic oscillation in these small networks with mass action kinetics. Then, we numerically demonstrated that all the networks have one positive and locally stable steady state and that stochastic fluctuations enable all of them to exhibit prominent stochastic oscillations, i.e., coherence resonance or noise-induced oscillation, in various biologically feasible parameter ranges. Stochastic fluctuations thus confer the additional dynamical capacity of stochastic oscillation on the group of biochemical reaction networks with negative feedback loops. Numerous biochemical oscillators are equipped with coupled positive and negative feedback loops. A few examples are the mitotic trigger in Xenopus, spikes/oscillations, the circadian clock, the galactose-signaling network in yeast, and the p53-Mdm2 oscillator. The addition of positive feedback loops to biochemical oscillators can confer functional and performance advantages. Networks with positive-negative interlinked feedbacks can have an increased frequency-tunable range and enhanced robustness of biochemical oscillations. Additionally, positive-negative interlinked feedback loops can give rise to a variety of dynamical behaviors, such as monostability, bistability, excitability, and oscillations, as the relative strength of the feedback loops is changed, and can even make cellular signal responses more desirable in a noisy environment. It seems that nature must have forced living organisms to evolve to favor networks with interlinked positive and negative feedback loops because of their many functional advantages.
in this present paper, we add one more favorable point to the long list of the advantage of networks with positive and negative interlinked feedbacks : networks with positive - negative coupled feedbacks are much better noise - induced oscillators than networks with only negative feedback loops in various biologically feasible parameter ranges .again , we consider small - sized networks with mass action kinetics and two types of repression : repression by proteolysis and repression by transcriptional control . we generate the exhaustive list of all possible network structures with interlnked feedback loops , resulting in the sixty - three networks with the differential coupling of positive and negative feedback loops , and model each network with a chemical master equation which is approximated to a linear fokker - planck equation through van kampen s system size expansion .firstly , stochastic fluctuations indeed confer a new dynamical capacity of stochastic oscillation to all the networks with interlinked feedback loops . secondly , implementing a k - medoids clustering algorithm , we group the sixty - three networks into three performance groups based on the average values of signal - to - noise ratio and robustness and identify the common network architecture among the networks belonging to the same performance group .we learn that the coupling of negative and positive feedback loops ( pnfbl ) generally enhance the noise - induced oscillation performance better than the negative feedback loops ( nfbl ) alone .however , the performance of pnfbl networks depends on the size of the positive feedback loops ( pfbl ) relative to that of the nfbl in the networks ; the performance of the networks with the bigger pfbl than nfbl is worse than that of the networks with only nfbl .we single out two networks which stand out in their performance of noise - induced oscillation in all four different parameter ranges and for two different models of repression .thirdly , we also elucidate the machanisms of noise - induced oscillation and find that for most networks , the natural frequency sets the low bound of the resonant frequency of the noise - induced oscillation .this corroborates the present understanding of the noise - induced oscillation mechanisms ; the imaginary part of a complex eigenvalue from the jacobian matrix of a linearized network is responsible to generate an inherent rotation in the network and the noise enhances such a rotation .however , we also find that a few networks with purely real eigenvalues can admit a very prominent and amplified noise - induced oscillation .the paper is organized in the following way . in the result section ,we discuss the k - medoids clustering of sixty - three networks into three performance groups , the classification of networks by network structure , the identification of a common network structural signature per performance group , the distributions of signal - to - noise ratio from real and complex eigenvalues of jacobian matrices , and finally the correlation between the values of signal - to - noise ratio and the spiralness and proximity . in model andmethod section introduced all the networks under our consideration , the derivation of the power spectrum from the chemical master equation based on the van kampen system size expansion , parameter sampling method , definition of signal - to - noise ratio , prominence and robustness , the eigenvalue calculation , and the linear stability of the networks ., b ) , c ) and d ) . 
in bottom row ,repression is modeled by transcriptional control .the transcriptional repression rate is sampled from the following four intervals : e ) , f ) , g ) , and h ) ( .the values of all the other kinetic rate constants are sampled from a preset respective biologically feasible interval for a)-c ) and e)-g ) whereas all of them are sampled from the same interval for d ) and h).,width=566,height=377 ] the exhaustive enumeration of all the networks with three nodes and with at least one negative feedback loop reveals that there are sixty - three biochemical reaction networks .each of sixty - three networks has one positive and locally stable steady state in the chosen parameter ranges .if a network admits an unstable steady state , then that set of parameter values is resampled until the network admits the stable fixed point .the statics of resampling cases is provided in the supplimentary information .sixty - three biochemical reaction networks are classified , based on their values of prominence and robustness .prominence is a measure of the coherence and amplification of a stochastic oscillation , defined as the averaged maximum signal - to - noise ratio ( snr ) .robustness is the fraction of the sample points that admit the value of the maximum snr greater than one .the maximum snr means that the largest snr value among three chemical species . see the method section for the detailed definition of prominence and robustnss and how to calculate themto classify the networks , we use a machine - learning classification tool , k - medoids clustering .the kinetic rate constant values are sampled in two different manner , biologically and non - biologically . for biologically feasible parameter values ,all the kinetic rate constants are sampled from a preset biologically feasible range in table iv .particularly , we consider three different intervals of the repression strength corresponding to weak , intermediate , and strong repression .we do not know even the approximate values of repression strength and want to see the dependence of noise - induced oscillation performance on repression strength .for non - biological parameter values , all the kinetic rate constants are sampled from the same `` non - biological '' range .this is to see the overall behavior of the networks across a parameter space .[ fig1 ] shows the classification of the sixty - three networks into three performance groups .the performance of networks is based on the values of prominece and robustness of stochastic oscillation .we classify the biochemical reaction networks into three clusters , using -medoids clustering .the k - medroid performs robustly and the choice of k=3 is well justified by the occurrence of a dramatic drop at k=3 in the graph of sum distance ( error ) versus the numebr of clusters over almost all parameter ranges .the graph of sum distance versus the number of clusters for fig .[ fig1 ] is provided in the fig .[ fig8 ] in the supplimentary information . just like other searching algorithms ,the k - metroid clustering algorithm can be trapped in local mimina . 
to prevent such a trap , we start from many different initial conditions and optimize the performance of the k - metriod clustering algorithm .because of techanical issues , we could not evaluate one network ( network 38 ) for the case of repression by proteolysis and five networks ( networks 36 - 39 , 61 ) for the case of repression by transcriptional control .we label three clusters of networks as follows : red as the best performing networks , green as the medium performers , and blue as the worst performers . in fig .[ fig2 ] and [ fig3 ] , we will indentify the common network structural properties among the networks belonging the same color group .the clustering also enables us to identify the all - season best performer of stochastic oscillation , network 31 .this network 31 belong to the red group for both repression models and across four different sampling intervals of repression strength .the network 31 consists of two three - dimensional nfbl plus three two - dimensional pfbl .one interesting observation is that the clustering is mainly done by the prominence values and does not strongly depend on the robustness .we do notice the decreasing trend of the prominence of most of the networks as the repression strength increases for both repression models in fig .for example , as the repression strength increases from a ) to c ) , the prominence value of network 31 decreases respectively .this pattern is most conspicuously noticed with the networks in green performance group . and[ models2 ] . while color indicates the absence of data due to technical issues .black color denotes that all sample points yield .red , green or blue color is assigned to each network in each parameter sampling range and exactly corresponds to the classification from the -medoids clustering in figure 1 . in the right graphis the average performance value for each network : integer numbers , 0,1,2,3 are assigned to four colors , black , blue , green , and red , respectively .red line indicates the average performance value from ( a ) repression by proteolysis whereas blue line denotes the performace value from ( b ) repression by transcriptional control.,width=566,height=377 ] in fig .[ fig2 ] , the sixty - three networks are classified into eight different architectural classes by network structural properties as provided in the tables i and ii . 
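The clustering step itself is compact enough to sketch. Below, stand-in SNR samples are reduced to the (prominence, robustness) features defined above and grouped by a plain alternating k-medoids with many random restarts, the usual guard against the local minima just mentioned; the paper's implementation and distance choices may differ.

```python
import numpy as np

def prominence_robustness(snr):
    """snr: (n_samples, 3) SNR of each species for every sampled parameter set.
    Prominence = mean over samples of the per-sample maximum SNR;
    robustness = fraction of samples whose maximum SNR exceeds 1."""
    m = np.max(np.asarray(snr), axis=1)
    return float(m.mean()), float(np.mean(m > 1.0))

def kmedoids(X, k, rng, n_iter=200):
    """Plain alternating k-medoids with Euclidean distances."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    med = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        lab = np.argmin(D[:, med], axis=1)
        new = med.copy()
        for j in range(k):
            members = np.where(lab == j)[0]
            if len(members):        # medoid = member minimising total in-cluster distance
                new[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, med):
            break
        med = new
    lab = np.argmin(D[:, med], axis=1)
    return lab, med, D[np.arange(len(X)), med[lab]].sum()

def cluster_networks(features, k=3, restarts=100, seed=0):
    """Many random initialisations, keeping the lowest-cost clustering."""
    rng = np.random.default_rng(seed)
    return min((kmedoids(features, k, rng) for _ in range(restarts)),
               key=lambda r: r[2])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # stand-in data: per-network SNR samples -> (prominence, robustness) features
    feats = np.array([prominence_robustness(rng.lognormal(mu, 0.8, size=(500, 3)))
                      for mu in rng.uniform(-1.0, 1.5, size=63)])
    labels, medoids, cost = cluster_networks(feats)
    print(cost, np.bincount(labels, minlength=3))
```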
this newtork structural classification is based on the number and the size of positive and negative feedback loops .those eight architectural classes of the sixty - three networks can form three architectural groups in a coarse - grained manner .the architectural group i includes all the networks with only negative feedback loops , networks 1 - 13 , whereas the architectural group ii ( classes ii through iv ) contains the networks with the coupling of smaller postive feedback loops and larger negative feedback loops , networks 14 - 39 .finally the architectural group iii ( classes v through viii ) consists of the networks with the coupling of larger positive feedback loops and smaller negative feedback loops and the networks with linear chains , networks 40 - 63 .the networks in the architectural group i dominantly belong to blue performance group for the case of repression by proteolysis whereas for repression by transcription control all three colors are mixed .a majority of the networks in the architectural group ii belong to green performance group for repression by proteolysis whereas for transcriptional repression , they belong to either green or red performance group with exception of a very few blue performance networks .the networks in the architectural group iii , for both cases of repression by proteolysis and transcription control , mostly belong to blue or black performance group . as far as the case of repression by proteolysis is concerned, the blue performance arises from two architectural groups ( or four architectural classes ) : group i ( class i ) and group iii ( classes v through viii ) .the clear message is that the networks either with exclusively negative feedback loops or with the larger - sized positive feedback loops and the smaller - sized negative feedback loops can admit the noise - induced oscillation in the chosen biologically feasible parameter ranges , but their oscillations are neither sufficiently well amplified nor coherent . in other words , the networks with the coupling of smaller - sized positive feedback loops and larger - sized negative feedback loops can admit quite well amplified and coherent stochastic oscillations .thus , the recommended network strucure for biochemical oscillators is one of the networks belonging to the architectural group ii . to see the clear relationship among the individual network structures , the parameter sampling range , and the performance of networks altogether , we represent each network not with the absolute value of average max snr but with the performance group to which it belongsthen , we average the performance colors of each network over the four different parameter sampling ranges for two different repression models , as presented in the rightmost subfigure of fig .since the average performance scores are quite noisy , it is very hard to appreciate any correlation between the individual networks and their average performance scores .but , it is easy to see that the average performance scores and the network architectural groups ( represented with three different background darknesses in fig .[ fig2](c ) are closely correlated for both repression models ( blue and red lines in fig .[ fig2](c ) ) . in other words ,the networks belonging to their respective architectural group demonstrate the similar performance , regardless of the detailed repression models . , where , represents the numerical value of a directed edge going from a node to a node and .e.g. , indicates x represses y while denotes x activates y. 
denotes the average value of over the properly rotated networks belonging to the same performance group .the vertical bars indicate the standard deviations .the average values of are visualized in the third and fourth columns .networks drawn adjacent to the boxes are the pictorial representations of common architecture of the networks in the same performance group .the same color is used for the common networks as that used in figure [ fig1 ] .type and thickness of the lines are determined by the average value of as discussed in the methods section .its positivity ( negativity ) denotes that the type of line is activation ( repression).,width=566,height=377 ] in fig .[ fig3 ] , we identify the common architectural properties among the networks that are classified into the same performance group as discussed in figure 1 . each of the networks belonging to the same performance group are rotated until they are all aligned such that the hamming distance among the networks are minimized with respect to a reference network .we then calculate the average edge value , defined as where denotes the directed edge from a node x to a node y as presented in fig .we graphically represent the average edge values into the common network architecture for each of the combination of two repression models and four parameter ranges .the average edge values are converted to four different thickness of an edge in the graphical representation : ( maximum thickness ) , ( medium thickness ) , ( minimum thickness ) and $ ] ( no edge ) . on the one hand , the networks belonging to the blue performance group share the following structural properties : one three - dimensional positive feedback loop coupled with many two - dimensional negative feedback loops .in other words , the blue networks are characerized with the strong presence of multiple two - dimensional ( small ) negative feedback loops and three - dimensional ( large ) positive feedback loop .we conclude that the networks with a larger positive feedback loop coupled with small negative feedback loops are likely to belong to the blue ( worst ) performance group . on the other hand ,the networks belonging to the green performance group are characterized by the strong presence of three - dimensional ( large ) negative feedback loop and one two - dimensional ( small ) positive feedback loop . in the networks belonging to the red performance group , we see the same strong presence of three - dimensional negative feedback loop as wellas the stronger presence of multiple two - dimensional positive feedback loops . 
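Both the loop-based architectural classification and the group-averaged architectures reduce to elementary operations on 3x3 signed adjacency matrices. The sketch below assumes the conventions A[i, j] = +1 for activation, -1 for repression and 0 for no edge, considers only the three cyclic node relabelings when aligning networks, and takes the first network of a group as the reference; the example networks are hypothetical.

```python
import numpy as np
from itertools import permutations

NODES = (0, 1, 2)
CYCLIC = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]       # node relabelings considered

def feedback_loops(A):
    """Count directed feedback loops of length 2 and 3 in a signed adjacency
    matrix (self-edges ignored).  Returns {(length, sign): count}; a negative
    sign, i.e. an odd number of repressive edges, marks a negative loop."""
    counts, seen = {}, set()
    for length in (2, 3):
        for cyc in permutations(NODES, length):
            i = cyc.index(min(cyc))
            canon = cyc[i:] + cyc[:i]            # one representative per directed cycle
            if canon in seen:
                continue
            seen.add(canon)
            edges = [(cyc[k], cyc[(k + 1) % length]) for k in range(length)]
            if all(A[e] != 0 for e in edges):
                sign = int(np.prod([A[e] for e in edges]))
                counts[(length, sign)] = counts.get((length, sign), 0) + 1
    return counts

def align(A, ref):
    """Cyclically relabel the nodes of A to minimise the Hamming distance to ref."""
    return min((A[np.ix_(p, p)] for p in CYCLIC),
               key=lambda M: int(np.sum(M != ref)))

def average_edges(group):
    """Average edge value of a performance group: align every member to the
    first network, then average the signed adjacency matrices entrywise."""
    nets = [np.asarray(n) for n in group]
    return np.mean([nets[0]] + [align(n, nets[0]) for n in nets[1:]], axis=0)

if __name__ == "__main__":
    n1 = np.array([[0, 1, 0], [0, 0, 1], [-1, 0, 0]])     # hypothetical member: one negative 3-loop
    n2 = np.array([[0, 1, -1], [0, 0, -1], [1, 0, 0]])    # same loop, relabeled, plus one extra edge
    print(feedback_loops(n1))        # {(3, -1): 1}
    print(average_edges([n1, n2]))   # shared loop edges keep |E| = 1, the extra edge averages to 0.5
```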
the best network topology for highly amplified and coherent stochastic oscillators is a large negative feedback loop coupled with many small positive feedback loops. we demonstrate the existence of a network signature for each performance group consistently across the two different repression models and over four different intervals of the model parameters. this is remarkable numerical evidence of the relationship between network architecture and network performance. the distributions are taken over the parameter samples; the labels (a) through (h) are the same as in figure 1. in fig. [ fig4 ], we present the distributions of the resonant frequencies of stochastic oscillations from the sixty-three networks, which are colored according to their performance group as defined in figure 1. the resonant frequency is defined as the non-zero frequency at which the power spectrum peaks. the distribution is plotted using the subset of power spectra that have a peak at non-zero frequency, i.e., the fraction of sample points that yield a maximum snr above 1. since any noise-free network has a single stable fixed point, the existence of a resonance frequency is purely a noise-induced effect. for the case of repression by transcriptional control (e)-(h), the distributions of resonant frequencies substantially overlap over almost all of the networks, independent of their performance and network architectures. for the case of repression by proteolysis (a)-(d), the distributions are not as homogeneous as for the case of repression by transcriptional control. the blue-colored networks tend to have more homogeneous distributions than the green-colored networks. for both repression models, the peaks of the resonant frequency distributions shift to the right as the repression strength increases. this is a clear signal that the noise-induced resonant frequency is positively correlated with the repression strength. the range of resonant frequencies falls within the biologically relevant range, from periods of hours to years. in fig. [ fig5 ] we discuss the origins and mechanisms of the stochastic oscillations accompanied by large maximum snr values. calculating the discriminant of the jacobian matrix for a set of randomly sampled parameter values for a chosen network, we accurately determine whether all three eigenvalues of the 3 x 3 jacobian matrix are real or a mixture of a real eigenvalue and a complex conjugate pair. we repeat the discriminant calculation with sets of randomly sampled parameter values from the four different parameter ranges for the sixty-three networks with the two different repression models. each subfigure in fig. [ fig5 ] presents three histograms of the maximum snr values: one from the samples yielding all real eigenvalues, another from the samples resulting in complex conjugate pairs, and the last from the total samples. the distributions from only networks 31 and 44 are presented in the main text, but the distributions from the rest of the networks are provided in figs. [ fig9 ] and [ fig10 ] in the supplementary information. note that maximum snr values whose logarithm is less than 0 do exist, but they are not presented here.
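as a concrete illustration of the eigenvalue classification just described, here is a minimal python sketch; the jacobian below is an arbitrary illustrative matrix, not one of the fitted models, and the discriminant formula is the standard one for a monic cubic (negative discriminant means one real eigenvalue and a complex-conjugate pair).

    import numpy as np

    def cubic_discriminant(jac):
        # characteristic polynomial x**3 + b x**2 + c x + d of the 3x3 jacobian
        _, b, c, d = np.poly(jac)
        # discriminant of a monic cubic: negative iff a complex-conjugate pair exists
        return 18*b*c*d - 4*b**3*d + b**2*c**2 - 4*c**3 - 27*d**2

    jac = np.array([[-1.0, 0.5, 0.0],
                    [0.0, -0.8, -1.2],
                    [0.9, 0.0, -0.3]])
    delta = cubic_discriminant(jac)
    print("complex pair" if delta < 0 else "all real", np.linalg.eigvals(jac))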
for the case of network 31 and almost all of the networks in the network architectural group ii (networks 14-39), the complex eigenvalues make the dominant contribution to the distribution of maximum snr values. in particular, the high values of maximum snr come exclusively from the samples yielding complex eigenvalues. the imaginary part of the complex eigenvalues is indicative of a rotational flow in the deterministic system, and the noise tends to amplify that rotational motion, resulting in maximally amplified and coherent oscillations. hence, this high correlation between the existence of complex eigenvalues and high values of maximum snr is well understood. further discussion of the complex eigenvalue cases follows in fig. [ fig6 ]. a significant contribution of the real eigenvalues to the maximum snr values is detected for network 44. all the networks belonging to the network architectural group iii (networks 40-63) benefit from a similarly large contribution from the real eigenvalues, as shown in fig. [ fig9 ] in the supplementary information. it is worthwhile to note that only a very few networks belonging to this third architectural group are in the green performance group, while a majority of the networks are in the blue performance group. this seems to suggest that the real eigenvalues do not elicit as large snr values as the complex eigenvalues do. however, network 44 belongs to the green performance group and its intermediate performance is due purely to the real eigenvalues, leaving us puzzled. finally, the networks from the network architectural group i (networks 1-13) have very similar distributions from both real and complex eigenvalues. the above observations are primarily based on the proteolysis repression model, as shown in fig. [ fig9 ] in the supplementary information. for the case of repression by transcriptional control, all sixty-three networks dominantly produce complex eigenvalues across the four different parameter sampling ranges. the network architecture seems to determine the ratio between the two distributions from real and complex eigenvalues and to affect the dynamical behaviors accordingly. the dots represent the sets of randomly sampled parameter values. in the bottom panel the plot of spiralness versus proximity is provided for the same network 31 with repression by proteolysis. both axes are in logarithmic scale. the colored heat map indicates the logarithmic value of the maximum snr. fig. [ fig6 ] shows that the resonant frequencies are positively correlated with the imaginary part of the complex eigenvalues of the jacobian matrix of the linearized system. as seen in the top panel of fig. [ fig6 ], the imaginary part plays the role of a lower bound for the resonant frequency, and the noise increases the values of the resonant frequencies. as briefly discussed in fig. [ fig5 ], the imaginary part of the complex eigenvalues is related to the rotational angular speed of the deterministic flow in the vicinity of a stable fixed point.
according to the subfigures in the upper panel of fig. [ fig6 ], the resonant frequency of the stochastic oscillation is just the angular speed of the deterministic stable spiral when the corresponding value is very large. however, as that value gets smaller, the resonant frequency becomes larger than the angular speed of the deterministic stable spiral. the noise seems to push the system along the deterministic rotational flow rather than against it. this tendency is consistently observed in three networks, networks 31, 35, and 59, and under both repression models, as presented in fig. [ fig11 ] in the supplementary information. also, as shown in the lower panel of fig. [ fig6 ], the stochastic oscillation is primarily driven by two causes: the complex eigenvalues of the jacobian matrix being very close to the imaginary axis, and their having a larger imaginary part. those two causes are quantified by the two numerical measures of proximity and spiralness. the maximum snr values, which are represented by color in the heat maps, are highly correlated with both proximity and spiralness. proximity is defined as the ratio of the magnitude of the noise along an eigen-direction to the negative real part of the complex eigenvalues, whereas spiralness is defined as the ratio of the angular speed of the deterministic stable spiral to the negative real part, which can be thought of as an attractive force toward the fixed point. thus the proximity measures how likely the noise is to push the system away from its stable fixed point. the spiralness measures the coherence and amplification of the stochastic oscillations, because the imaginary part is very closely related to the resonant frequency and the magnitude of the negative real part is inversely proportional to the amplitude of the stochastic oscillations, i.e., the peak amplitude of the power spectrum. in fig. [ fig12 ] in the supplementary information, one representative network is selected from each performance group: network 59 for the blue, network 35 for the green, and network 31 for the red performance group. fig. [ fig12 ] clearly shows that, for both repression models, the networks belonging to the better performance groups have larger values of proximity and spiralness overall and larger values of maximum snr. this paper is the first extensive comparative study pertaining to the stochastic dynamical behavior of stochastic biochemical reaction networks. most importantly, we numerically demonstrate the strong correlation between the stochastic behavior and the network architectural properties, namely the coupling patterns of positive and negative feedback loops.
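to make the proximity and spiralness measures defined above concrete, here is a minimal python sketch computing them from the leading complex eigenvalue of a jacobian; the noise magnitude passed in is a placeholder for the linear-noise-approximation diffusion term along the eigen-direction, and the example matrix is purely illustrative.

    import numpy as np

    def proximity_spiralness(jac, noise_magnitude):
        eig = np.linalg.eigvals(jac)
        complex_eigs = eig[np.abs(eig.imag) > 1e-9]
        if complex_eigs.size == 0:
            return None                              # no rotational component
        lam = complex_eigs[np.argmax(complex_eigs.imag)]
        proximity = noise_magnitude / abs(lam.real)  # noise strength vs. restoring force
        spiralness = abs(lam.imag) / abs(lam.real)   # angular speed vs. restoring force
        return proximity, spiralness

    jac = np.array([[-0.2, 1.0, 0.0],
                    [-1.0, -0.2, 0.0],
                    [0.0, 0.5, -1.0]])
    print(proximity_spiralness(jac, noise_magnitude=0.05))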
we investigate noise-induced oscillation in networks with only three biochemical species whose reactions are governed by mass action kinetics and with couplings of positive and negative feedback loops. modeling this set of stochastic biochemical reaction networks using the linear noise approximation and reading the signal-to-noise ratio values from the analytically derived power spectra, we show that all the networks with coupled positive and negative feedbacks are capable of admitting noise-induced oscillations in biologically feasible parameter ranges. also, using a k-medoids clustering algorithm, we group the sixty-three networks into three performance groups and identify the common network architecture among the networks belonging to the same performance group. we learn that the coupling of negative and positive feedback loops (pnfbl) generally enhances the noise-induced oscillation performance more than negative feedback loops (nfbl) alone. however, the performance of pnfbl networks depends on the size of the positive feedback loops (pfbl) relative to that of the nfbl in the networks; the performance of the networks with a pfbl bigger than the nfbl is worse than that of the networks with only an nfbl. as shown in table v in the supplementary information, we find that a few networks can generate an unstable fixed point in a very small fraction of the parameter space, and this dynamical instability affects the performance of those networks tremendously. the noise-induced oscillation can arise with a significantly high signal-to-noise ratio when the dynamical instability is nearby, an effect known as the noisy precursor of nonlinear instabilities [ wiesenfeld ]. for the case of proteolysis, all the networks except three, networks 10, 31, and 55, have one locally stable fixed point across all parameter ranges. networks 10, 31, and 55 generate an unstable fixed point in up to 9 percent of the random samples from each of the four parameter ranges. networks 10 and 31 outperform the other networks across all parameter ranges, whereas network 55 is in the blue performance group despite having instabilities inside the parameter sampling ranges. however, the networks from the network architectural group ii (networks 14-39), except network 31, do not exhibit any dynamical instability, but are still capable of generating noise-induced oscillations with extraordinarily large values of maximum snr in the biologically feasible ranges. for the case of transcriptional control, we find that the presence of instabilities within the chosen parameter range is nicely correlated with the performance of the networks, almost without exception. the fraction of sample points leading to instabilities goes up to 9.5% for a few networks, but stays well below 1% for the rest. we consider biochemical reactions in which the chemical species get synthesised and degraded. the chemical species get synthesised constitutively and their synthesis can be enhanced by another chemical species, which is called activation. most often, the activation occurs in gene regulation, where a protein functioning as a transcription factor binds to the promoter of a target gene and enhances the activity of the gene, increasing the production rate of the target protein. the chemical species get degraded spontaneously and they can be negatively regulated by another chemical species, which is called repression.
in this paper, we consider two repression models: repression by proteolysis and repression by transcription control. repression by proteolysis can be found in protein-protein interactions leading to proteolysis, such as ubiquitin-mediated protein degradation. repression by transcription control can often be found in gene regulation, where a protein functioning as a repressor binds to the promoter of a target gene and shuts off the activity of that gene, decreasing the synthesis of the target protein. in this paper, we consider an ensemble of three-node directed graphs with the following constraints: (a) there can exist at most one directed edge from one node to another. (b) when a directed edge is present, it can be either inhibition or activation; in the graphical representation, one arrow type is used for a biochemical species activating another species, and the other for a species inhibiting another species. (c) we only consider networks in which all three nodes are part of some loop; thus networks that have nodes with only incoming edges or only outgoing edges, or that are isolated, are not considered. (d) self-directed edges are not allowed. the exhaustive list of three-node directed graphs under our consideration is presented in tables [ models ] and [ models2 ]. table [ models ] is for networks with only negative feedback loops while table [ models2 ] is for networks with coupled positive and negative feedback loops. only topologically distinct networks are allowed: if two networks are identical after rotation and/or mirroring, then the two networks are topologically identical. all the graphs can have at most 6 directed edges, and the topology of each graph is determined by a distinctive arrangement of the 6 directed edges. thus, we represent each graph with a string of 6 characters. each of the 6 characters can be either 'a' for activation, 'i' for inhibition, or '0' when an edge is absent. the first three characters indicate the three directed edges going counterclockwise around the triangle of species, and the last three characters indicate the three directed edges going clockwise. for an example, see fig. [ example_net ] for network number 7. this network consists of three counterclockwise edges (activation, inhibition, and activation) and two clockwise edges (both inhibition), with the remaining clockwise edge absent; network 7 can therefore be represented by the corresponding 6-character string. we represent the rest of the networks in our ensemble by applying the same rules. the networks in our ensemble can be categorized into different groups depending on their underlying topological characteristics, such as: 1) cyclic negative feedback loops (nfbls); 2) linear negative feedback loops; 3) cyclic positive-negative interlinked feedback loops (pnfbls); and 4) linear positive-negative interlinked feedback loops. the linear feedback loops are made up of only two-component feedback loops, which form a linear chain of three nodes lying in a row. the cyclic feedback loops always involve a three-component feedback loop as a backbone structure, which forms a triangular closed loop.
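to make the encoding and the topological-identity rule concrete, here is a minimal python sketch that enumerates all 6-character codes, keeps those in which every node lies on a directed loop, and counts one representative per class of networks that coincide after relabeling the nodes; the inclusion rules here are a simplified reading of the construction above, so the resulting count need not match the paper's sixty-three exactly.

    from itertools import permutations, product

    EDGES = [(0, 1), (1, 2), (2, 0), (0, 2), (2, 1), (1, 0)]   # counterclockwise edges, then clockwise
    VALUE = {'a': 1, 'i': -1, '0': 0}

    def to_matrix(code):
        m = [[0] * 3 for _ in range(3)]
        for (i, j), c in zip(EDGES, code):
            m[i][j] = VALUE[c]
        return m

    def every_node_on_loop(m):
        def reachable(v):
            seen, frontier = set(), {j for j in range(3) if m[v][j]}
            while frontier:
                seen |= frontier
                frontier = {k for j in frontier for k in range(3) if m[j][k]} - seen
            return seen
        return all(v in reachable(v) for v in range(3))

    def canonical(m):
        # minimum over all node relabelings (rotations and mirrorings of the triangle)
        return min(tuple(m[p[i]][p[j]] for i in range(3) for j in range(3))
                   for p in permutations(range(3)))

    kept = {canonical(to_matrix(code))
            for code in product('ai0', repeat=6)
            if every_node_on_loop(to_matrix(code))}
    print(len(kept), "topologically distinct networks with every node on a loop")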
in table [ models ], we classified networks 1-10 into type i and type ii. type i nfbls have `` aia000 '' as the backbone structure while type ii nfbls have `` iii000 '' as the backbone structure. all type i and type ii nfbls can be created by adding other links to the backbone structures `` aia000 '' and `` iii000 '', respectively. the cyclic pnfbls can be constructed by adding positive feedback loops in an exhaustive way to the different types of cyclic nfbl backbone networks. they are termed cyclic pnfbls of type i and type ii when they are obtained by adding positive feedback loops to nfbls of type i and type ii, respectively. in tables [ models ] and [ models2 ], we further indicate the number and types of the feedback loops present in each network as a pair of counts, where one number indicates the number of negative feedback loops (n) and the other indicates the number of positive feedback loops (p).
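here is a minimal python sketch of how such feedback-loop counts (and the coarse architectural grouping used earlier) can be read off a 3x3 signed adjacency matrix; the grouping rule is a simplified reading of the classification in the text, and the example network is illustrative.

    from itertools import permutations

    def feedback_loops(m):
        # list of (size, sign) for all directed loops of length 2 and 3 in a signed 3x3 matrix
        loops = []
        for i in range(3):
            for j in range(i + 1, 3):
                if m[i][j] and m[j][i]:
                    loops.append((2, m[i][j] * m[j][i]))
        for i, j, k in permutations(range(3), 3):
            if i == min(i, j, k) and m[i][j] and m[j][k] and m[k][i]:
                loops.append((3, m[i][j] * m[j][k] * m[k][i]))
        return loops

    def summary(m):
        loops = feedback_loops(m)
        n = sum(1 for _, sgn in loops if sgn < 0)      # number of negative feedback loops
        p = sum(1 for _, sgn in loops if sgn > 0)      # number of positive feedback loops
        pos = [s for s, sgn in loops if sgn > 0]
        neg = [s for s, sgn in loops if sgn < 0]
        if not pos:
            group = "i"
        elif neg and max(pos) < max(neg):
            group = "ii"
        else:
            group = "iii"
        return n, p, group

    # x activates y, y represses z, z activates x (a negative 3-loop), plus mutual
    # repression between y and z (a positive 2-loop)
    m = [[0, 1, 0], [0, 0, -1], [1, -1, 0]]
    print(summary(m))     # -> (1, 1, 'ii')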
according to chemical reaction network theory, the topology of a certain class of chemical reaction networks, regardless of the kinetic details, sets a limit on the dynamical properties that a particular network can potentially admit; the structure of a network predetermines the dynamic capacity of the network. we note that stochastic fluctuations can possibly confer a new dynamical capability on a network. thus, it is of tremendous value to understand, and to be able to control, the landscape of stochastic dynamical behaviors of a biochemical reaction network as a function of network architecture. here we investigate such a case, where stochastic fluctuations can give rise to the new capability of noise-induced oscillation in a subset of biochemical reaction networks: the networks with only three biochemical species whose reactions are governed by mass action kinetics and with the coupling of positive and negative feedback loops. we model the networks with master equations and approximate them using the linear noise approximation. for each network, we read the signal-to-noise ratio value, an indicator of amplified and coherent noise-induced oscillation, from the analytically derived power spectra. we classify the networks into three performance groups based on the average values of the signal-to-noise ratio and robustness. we identify the common network architecture among the networks belonging to the same performance group, from which we learn that the coupling of negative and positive feedback loops generally enhances the noise-induced oscillation performance more than negative feedback loops alone. the performance of the networks also depends on the relative size of the positive and negative feedback loops; the networks with bigger positive and smaller negative feedback loops are much worse oscillators than the networks with only negative feedback loops.
in , a very general model for sparse random graphs was introduced , corresponding to an inhomogeneous version of , and many properties of this model were determined , in particular , the critical point of the phase transition where the giant component emerges .part of the motivation was to unify many of the new random graph models introduced as approximations to real - world networks .indeed , the model of includes many of these models as exact special cases , as well as the ` mean - field ' simplified versions of many of the more complicated models .( the original forms are frequently too complex for rigorous mathematical analysis , so such mean - field versions are often studied instead . ) unfortunately , there are many models with key features that are not captured by their mean - field versions , and hence not by the model of .the main problem is that many real - world networks exhibit _ clustering _ : for example , while there are vertices and only edges , there may be triangles , say .in contrast , the model of , like , produces graphs that contain essentially no triangles or short cycles .most models introduced to approximate particular real - world networks turn out to be mathematically intractable , due to the dependence between edges . nevertheless , many such models have been studied ; as this is not our main focus , let us just list a few examples of early work in this field .one of the starting points in this area was the ( homogeneous ) ` small - world ' model of watts and strogatz .another was the observation of power - law degree sequences in various networks by faloutsos , faloutsos and faloutsos , among others .of the new inhomogeneous models , perhaps the most studied is the ` growth with preferential attachment ' model introduced in an imprecise form by barabsi and albert , later made precise as the ` lcd model ' by bollobs and riordan .another is the ` copying ' model of kumar , raghavan , rajagopalan , sivakumar , tomkins and upfal , generalized by cooper and frieze , among others . for ( early ) surveys of work in this fieldsee , for example , barabsi and albert , dorogovtsev and mendes , or bollobs and riordan .roughly speaking , any sparse model with clustering must include significant dependence between edges , so one might expect it to be impossible to construct a general model of this type that is still mathematically tractable .however , it turns out that one can do this .the model that we shall define is essentially a generalization of that in , although we shall handle certain technicalities in a different way here . throughout this paperwe use standard graph theoretic notation as in .for example , if is a graph then denotes its vertex set , its edge set , the number of vertices , and the number of edges .we also use standard notation for probabilistic asymptotics as in : a sequence of events holds _ with high probability _ , or _ whp _ , if as .if is a sequence of random variables and is a deterministic function , then means , where denotes convergence in probability .let us set the scene for our model . 
by a _ type space _ we simply mean a probability space .often , we shall take ] with lebesgue measure .sometimes we consider finite .as will become clear , any model with finite can be realized as a model with type space ] .hence , when it comes to proofs , we lose no generality by taking ] are likely to be useful for geometric applications , as in .let consist of one representative of each isomorphism class of finite connected graphs , chosen so that if has vertices then =\{1,2,\ldots,{r}\} ] .first let be i.i.d .( independent and identically distributed ) with the distribution .given , construct as follows , starting with the empty graph .for each and each with , and for every -tuple of distinct vertices ^{r} ] is a permutation such that if and only if , then we assume that for all . in the poisson version , orif we add copies of graphs with probability , the correction terms in and its generalizations disappear : in the edge - only case , given , vertices and are joined with probability , and in general we obtain exactly the same random graph if we symmetrize each with respect to .for any kernel family , let be the corresponding _ edge kernel _, defined by where the second sum runs over all ordered pairs with , and we integrate over all variables apart from and . note that the sum need not always converge ; since every term is positive this causes no problems : we simply allow for some . given and , the probability that and are joined in is at most , and this upper bound is typically quite sharp .for example , if is bounded in the sense of definition [ dbounded ] below , then the probability is . in other words, captures the edge probabilities in , but not the correlations . before proceeding to deeper properties ,let us note that the expected number of added copies of is .unsurprisingly , the actual number turns out to be concentrated about this mean .let be the _asymptotic edge density _ of .since every copy of contributes edges , the following theorem is almost obvious , provided we can ignore overlapping edges .a formal proof will be given in section [ sec_subgraphs ] .( a similar result for the total number of atoms is given in lemma [ badv ] . ) [ tedges ] as , converges in probability to the asymptotic edge density . in other words , if then , and if then , for every constant , we have whp .moreover , as in , our main focus will be the emergence of the giant component . by the _ component structure _ of a graph , we mean the set of vertex sets of its components , i.e. , the structure encoding only which vertices are in the same component , not the internal structure of the components themselves .when studying the component structure of , the model can be simplified somewhat .recalling that the atoms are connected by definition , when we add an atom to a graph , the effect on the component structure is simply to unite all components of that meet the vertex set of , so only the vertex set of matters , not its graph structure .we say that is a _ clique kernel family _ if the only non - zero kernels are those corresponding to complete graphs ; the corresponding random graph model is a _ clique model_. for questions concerning component structure , it suffices to study clique models . for clique kernels we write for ;as above , we always assume that is symmetric , here meaning invariant under all permutations of the coordinates of . 
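as a rough illustration of the construction described above, here is a minimal python sketch that samples a small instance of the model for a kernel family containing only an edge kernel and a triangle kernel on [0,1]: pairs receive an edge with probability of order k2/n and triples become triangles with probability of order k3/n^2. the particular kernels, the cap at probability 1, and the glossing over of ordered-versus-unordered tuple bookkeeping (which only affects constant factors) are simplifying assumptions of this sketch, not the paper's exact prescription.

    import random
    from itertools import combinations

    def sample_graph(n, k2, k3, seed=0):
        rng = random.Random(seed)
        x = [rng.random() for _ in range(n)]                   # i.i.d. uniform vertex types
        edges = set()
        for i, j in combinations(range(n), 2):                 # edge atoms
            if rng.random() < min(1.0, k2(x[i], x[j]) / n):
                edges.add((i, j))
            # triangle atoms are handled below
        for i, j, k in combinations(range(n), 3):
            if rng.random() < min(1.0, k3(x[i], x[j], x[k]) / n**2):
                edges.update({(i, j), (i, k), (j, k)})
        return x, edges

    k2 = lambda x, y: 2.0          # constant illustrative kernels
    k3 = lambda x, y, z: 1.5
    x, edges = sample_graph(200, k2, k3)
    print(len(edges), "edges in a 200-vertex sample")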
given a general kernel family , the corresponding ( symmetrized ) clique kernel family is given by with where denotes the symmetric group of permutations of ] , define by and let ( the factors in and in the definition of are unfortunate consequences of our choice of normalization . ) let be a particle of in generation with type , and suppose that each particle in generation of type has some property with probability , independently of the other particles .given a child clique of , the bracket in the definition of expresses the probability that one or more of the corresponding child particles has property .hence is the expected number of child cliques containing a particle with property , and , from the poisson distribution of the child cliques , is the probability that there is at least one such clique , i.e. , the probability that at least one child of has property .let denote the survival probability of the branching process , and the survival probability of .assuming for the moment that the function ] to see lemma [ l_max ] below . from the definitions of and ,it is immediate that in our analysis we shall also consider the linear operator defined by where is defined by . for a hyperkernel ( which is the only type of kernel family for which we define the branching process ) , we have from which it is easy to check that is the linearized form of : more precisely , is obtained by replacing by in the definition of .let us note two simple consequences of this fact .for any sequence in } ] .also , if and only if . since the integral of a non - negative function is positive if and only if the function is positive on a set of positive measure , it follows that for any ] is any other solution to , then holds for every .let be the probability that survives for at least generations , so is identically .conditioned on the set of child cliques , and hence children , of the root , each child of type survives for further generations with probability .these events are independent for different children by the definition of the branching process , so .the result follows from the monotonicity of and the fact that , noting that for the strict inequality .let us remark for the last time on the measurability of the functions we consider : in the proof above , is measurable by definition . from the definition of and the measurability of each , it follows by induction that each is measurable , and hence that is .similar arguments apply in many places later , but we shall omit them .we next turn to the uniqueness of the non - zero solution ( if any ) to .the key ingredient in establishing this is the following simple inequality concerning the non - linear operator .[ l_fsg ] let be an integrable hyperkernel , and let and be measurable functions on with .then we may write as , where is the non - linear operator corresponding to the single kernel , so is defined by the summand in . it suffices to prove that we shall in fact show that for any ( distinct ) we have since is symmetric , follows .( in fact , can be true in general only if always holds , considering the symmetrization of a delta function . )now can be viewed as an inequality in variables .this inequality is linear in each variable .furthermore , it is linear in each pair , . in proving for any , we may thus assume that for each one of three possibilities holds : , , or and .in other words , we may assume that and are -valued .suppose then for a contradiction that fails for some -valued and with . 
then there must be some permutation such that which we may take without loss of generality to be the identity permutation .since both sides of are -valued , the left must be and the right .since the left is , we have , so , using , .but now for the right hand side of to be 0 the final product in must be , so for , i.e. , takes the value only once . of course , must take the value at least twice , otherwise we have equality .but now the left hand side of is exactly , coming from terms with and hence .the right hand side is at least , from any mapping to some with .hence holds after all , giving a contradiction and completing the proof .if is reducible , then may in general have several non - zero solutions . to prove uniqueness in the irreducible case we need to know what irreducibility tells us about .[ l_red2 ] if there exists a measurable ] , not a.e . , such that for some , then .the proof is the same as that of lemma 5.13 in , using in place of .the next step is to show that if , then there is a function with the property described in lemma [ l_fup ] . in we did this by considering a bounded kernel .here we have to be a little more careful , as we are working with the non - linear operator rather than with ; this is no problem if we truncate our kernels suitably .[ dbounded ] we call a hyperkernel _ bounded _ if two conditions hold : only finitely many of the are non - zero , and each is bounded .similarly ( for later use ) , a general kernel family is _ bounded _ if only finitely many of the are non - zero , and each is bounded . in other words , is bounded if there are constants and such that for , and is pointwise bounded by for .note that if is bounded , then the corresponding edge kernel is bounded in the usual sense . given a hyperkernel , for each we let be the bounded hyperkernel obtained from by truncating each , , at , and replacing by a zero kernel for .thus the truncation of a general kernel family is defined similarly , replacing the condition by .[ l_ef ] if then there is a and an ] constructed according to the same rules as , except that instead of adding a we add a hyperedge with vertices .in fact , we consider the poisson version of the model , allowing multiple copies of the same hyperedge .let be a bounded hyperkernel , and let be a corresponding upper bound , so is the constant kernel for , and zero for , while holds pointwise for all .taking , as usual , our vertex types to be independent , each having the distribution , we construct coupled random ( multi-)hypergraphs and on ] be chosen uniformly at random , independently of and .let denote the -neighbourhood of in , and that in .counting the expected number of cycles shows that for any fixed , the hypergraph is whp treelike .furthermore , standard arguments as for show that one may couple and the first generations of so as to agree in the natural sense whp .when is treelike , then may be constructed using exactly the same random deletion process that gives ( the first generations of ) as a subset of .it follows that and the first generations of may be coupled to agree whp . recalling that and have the same components , for any fixed one can determine whether the component containing has exactly vertices by examining . 
writing for the number of vertices of a graph that are in components of size , it follows that as in , starting from two random vertices easily gives a corresponding second moment bound , giving convergence in probability .[ nkbdd ] let be a bounded hyperkernel .then for any fixed .of course it makes no difference whether we work with or : lemma [ nkbdd ] also tells us that the extension to arbitrary hyperkernels is easy from theorem [ tappc ] .[ nkint ] let be an integrable hyperkernel .then for each fixed we have as in , we simply approximate by bounded hyperkernels .for let be the truncated hyperkernel defined by .let be fixed , and let be arbitrary . from monotone convergence and integrability , for large enough we have say . by theorem [ tappc](i ) , increasing if necessary , we may also assume that since holds pointwise , we may couple the hypergraphs and associated to and so that . recall that is produced from by replacing each hyperedge with vertices by an -clique .however , as noted earlier , if we form from by replacing each by any connected simple graph on the same set of vertices , then and will have exactly the same component structure , and in particular .let us form and in this way from and , replacing any hyperedge with vertices by some tree on the same set of vertices . recalling that ,we may of course assume that .writing for the number of -vertex hyperedges in a hypergraph , hence , recalling that and noting that adding one edge to a graph can not change by more than , we see that with probability at least we have applying lemma [ nkbdd ] ( or rather ) to the bounded hyperkernel , we have . usingit follows that when is large enough , with probability at least , say , we have .since was arbitrary , we thus have as required .the local coupling results of the previous section easily give us the ` right ' number of vertices in large components . as usual, we will pass from this to a giant component by using the ` sprinkling ' method of erds and rnyi , first uncovering the bulk of the edges , and then using the remaining ` sprinkled ' edges to join up the large components .the following lemma gathers together the relevant consequences of the results in the previous section .[ l_easypart ] let be an integrable hyperkernel , and let . then , given any , there is a and a function such that holds whp , where .from lemma [ nkint ] we have for each fixed .since as , it follows that for some we have we may and shall assume that . since , the first statement of the lemma follows. for the second , we may of course assume that ; otherwise , there is nothing to prove . as , from theorem [ tappb](i )we have .fix with , and let . applying to , there is some such that which implies . in the light of lemma [ l_easypart ] , and writing for , to prove theorem [ th1 ] it suffices to show that if is irreducible , then for any we have whp ; then as required . 
also , from and the fact that , we obtain as claimed .since , there is a natural coupling of the graphs and appearing in lemma [ l_easypart ] in which always holds .our aim is to show that , whp , in passing from to , the extra ` sprinkled ' edges join up almost all of the vertices of in ` large ' components ( those of size at least ) into a single component .unfortunately , we have to uncover the vertex types before sprinkling , so we do not have the usual independence between the bulk and sprinkled edges .a similar problem arose in bollobs , borgs , chayes and riordan in the graph context , as opposed to the present hypergraph context .it turns out that we can easily reduce to the graph case , and thus apply a lemma from .this needs a little setting up , however . hereit will be convenient to take ] . following frieze and kannan , the _ cut norm _ of is defined by }\left| \int_{s\times t } f(x , y){\,d}x{\,d}y\right|,\ ] ] where the supremum is taken over all pairs of measurable sets .note that , since the integral above is bounded by ^ 2}|f| ] and a measurable function \to [ 0,1] ] with full measure ; it makes no difference . )we write if is a rearrangement of . given two kernels , on ] , .there is also a sparse random graph associated to ; this is the graph on ] , and and positive constants .there is a constant such that whenever is a sequence of symmetric matrices with entries in ] with , where denotes the event that contains a path starting in and ending in . in fact , this lemmais not stated explicitly in , but this is exactly the content of the end of section 3 there ; for an explicit statement and proof of ( a stronger version of ) this lemma see ( * ? ? ?* lemma 2.14 ) .we shall apply lemma [ l_bbcr ] to graphs corresponding to ( subgraphs of ) , where is as in lemma [ l_easypart ] . to achieve independence between edges , we shall simply take only one edge from each hyperedge . unfortunately , the problem of conditioning on the still remains ; we shall return to this shortly .let be an integrable hyperkernel and let be the poisson ( multi-)hypergraph corresponding to .given the sequence , let be the random ( multi-)graph formed from by replacing each -vertex hyperedge by a single edge , chosen uniformly at random from the edges corresponding to .with fixed , the numbers of copies of each edge in are independent poisson random variables . from basic properties of poisson processes, it follows that , with fixed , the number of copies of each edge in are also independent poisson random variables .our next aim is to calculate the edge probabilities in . as usual , we write for the _ falling factorial _ . given and distinct ] , and let be the -by- matrix with entries for and if .with given , the expected number of -vertex hyperedges in containing is .hence the expected number of edges in is exactly .now clearly depends on and .unfortunately , it also depends on all the other .the next lemma will show that the latter dependence can be neglected .set and let be the ` re - scaled ' edge kernel defined by comparing with the formula for , note that we have divided each term in the sum in by , the number of edges in . note that recall that and depend on the random sequence . 
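before the next lemma, here is a small python sketch that makes the cut-norm definition above concrete for a matrix: it maximises the absolute value of the sum of entries over a product set S x T by brute force over the subsets S (the optimal T then simply collects either the positive or the negative column sums). the normalisation by n^2, chosen here to parallel the kernel version, and the random example matrix are assumptions of this sketch.

    import numpy as np
    from itertools import product

    def cut_norm(a):
        a = np.asarray(a, dtype=float)
        n = a.shape[0]
        best = 0.0
        for s_mask in product((0.0, 1.0), repeat=n):           # all subsets S of the rows
            col = np.asarray(s_mask) @ a                       # column sums over S
            # the best T keeps either all positive or all negative column sums
            best = max(best, col[col > 0].sum(), -col[col < 0].sum())
        return best / n**2

    rng = np.random.default_rng(0)
    a = rng.normal(size=(10, 10))
    print(cut_norm(a))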
in the next lemma ,the expectation is over the random choice of ; no graphs appear at this stage .[ la ] let be an integrable hyperkernel .then for every , and we have suppose first that is bounded .let where the sum again runs over all sequences of distinct indices in \setminus{\ensuremath{\{i , j\}}} ] , with lebesgue measure .let be an irreducible , integrable kernel family , let be the corresponding hyperkernel , given by , and let .as noted after lemma [ l_easypart ] , in the light of this lemma , it suffices to prove the lower bound on .we may and shall assume that and , say .let and be as in lemma [ l_easypart ] , and let , and be the poisson multi - hypergraphs associated to the hyperkernels , and , respectively . using the same vertex types for all three hypergraphs , there is a natural coupling in which , with and _ conditionally _ independent given .define and by and , respectively , starting from the integrable hyperkernel .note that is a kernel on ] associated to a matrix . since , it follows that , and hence that .coupling the random sequences for different appropriately , we may and shall assume that almost surely .since is a bounded kernel on ] ) such that each is measurable , the restriction of to is irreducible ( in the natural sense ) , and , apart from a measure zero set , is zero off . suppressing the dependence on ,let be the subgraph of induced by the vertices with types in . since the vertex types are i.i.d ., the probability that contains any edges other than those of is 0 .now has a random number of vertices , with a binomial distribution , which is concentrated around its mean .given , the graph is another instance of our model .let , so that . from the remarks above it is easy to check that theorem [ th1 ] gives and ; we omit the details . sorting the into decreasing order , it follows that for each fixed ( finite ) , in particular , for and .one of the most studied features of the various inhomogeneous network models is their ` robustness ' under random failures , and in particular , the critical point for site or bond percolation on these random graphs .for example , this property of the barabsi albert model was studied experimentally by barabsi , albert and jeong , heuristically by callaway , newman , strogatz and watts ( see also ) and cohen , erez , ben - avraham and havlin , and rigorously in . in the present context , given , we would like to study the random subgraphs and }}(n,{{\undertilde{{\kappa}}}}) ] .( for precise statements , see ( * ? ? ?* section 4 ) . ) here , the situation is a little more complex . when we delete edges randomly from , it may be that what is left of a particular atom is disconnected .this forces us to consider _ generalized kernel families _ with one kernel for each , where the set consists of one representative of each isomorphism class of finite ( not necessarily connected ) graphs . rather than present a formal statement ,let us consider a particular example .suppose that is the generalized kernel family with only one kernel , corresponding to the disjoint union of and .let be the kernel family with two kernels , corresponding to and for .then and are clearly very similar ; the main differences are that contains exactly the same number of added triangles and , whereas in the numbers are only asymptotically equal , and that in a triangle and a added in one step are necessarily disjoint . 
since almost all pairs of triangles and in are disjoint anyway , it is not hard to check that and are ` locally equivalent ' , in that the neighbourhoods of a random vertex in the two graphs can be coupled to agree up to a fixed size whp . more generally , given a generalized kernel family , let be the kernel family obtained by replacing each kernel by one kernel for each component of , obtained by integrating over variables corresponding to vertices of as above. this may produce several new kernels for a given connected ; we of course simply add these together to produce a single kernel .note that so if is integrable , then so is . although and are not exactly equivalent , the truncation and local approximation arguments used to prove theorem [ th1 ] carry over easily to give the following result .[ th_disc ] let be a generalized kernel family , let be the corresponding kernel family as defined above , and let be the hyperkernel corresponding to , defined by . if is irreducible , then and .note that the hyperkernel corresponding to is obtained by replacing each ( now connected , as before ) atom by a clique ; this corresponds to replacing each _ component _ of an atom in by a clique . turning to bond percolation on ,i.e. , to the study of the random subgraph of , let be the kernel family obtained by replacing each kernel by kernels , one for each spanning subgraph of .( as before , we then combine kernels corresponding to isomorphic graphs . )working work with the poisson multigraph formulation of our model , the graphs and have exactly the same distribution .this observation and theorem [ th_disc ] allow us ( in principle , at least ) to decide whether has a giant component , i.e. , to find the critical point for bond percolation on .let us illustrate this with the very simple special case in which each kernel , , is constant , say .we assume that is integrable , i.e. , that . in this caseeach kernel making up is also constant , and the same applies to the hyperkernel corresponding to .hence , from the remarks above and , has a giant component if and only if the asymptotic edge density of the hyperkernel is at least .since we obtain by first taking random subgraphs of our original atoms , and then replacing each component by a clique , we see that where is the expected number of unordered pairs of distinct vertices of that lie in the same component of the random subgraph of obtained by keeping each edge with probability , independently of the others .alternatively , where is the _ susceptibility _ of , i.e. , the expected size of the component of a random vertex of .if we have only a finite number of non - zero , then may be evaluated as a polynomial in , and the critical point found exactly . turning to site percolation ,there is a similar reduction to another instance of our model , most easily described by modifying the type space .indeed , we add a new type corresponding to deleted vertices , and set . setting for , we obtain a probability measure on . 
replacing each kernel by kernels on defined appropriately ( with corresponding to the subgraph of spanned by the non - deleted vertices ), one can show that }}(n,{{\undertilde{{\kappa}}}}) ] ; we omit the mathematically straightforward but notationally complex details .heuristically , the vertex degrees in can be described as follows .consider a vertex and condition on its type .the number of atoms that contain then is asymptotically poisson with a certain mean depending on and .however , each atom may add several edges to the vertex , and thus the asymptotic distribution of the vertex degree is compound poisson ( see below for a definition ) .moreover , this compound poisson distribution typically depends on the type , so the final result is that , asymptotically , the vertex degrees have a mixed compound poisson distribution . in this sectionwe shall make this precise and rigorous .we begin with some definitions .if is a finite measure on , then , the _ compound poisson distribution with intensity _ , is defined as the distribution of , where are independent poisson random variables .equivalently , is the distribution of the sum of the points of a poisson process on with intensity , regarded as a multiset .( the latter definition generalizes to arbitrary measures on such that , but we consider in this paper only the integer case . )since has probability generating function , has probability generating function whenever this is defined , which it certainly is for .if is a random finite measure on , then denotes the corresponding _mixed compound poisson distribution_. from now on , for each , will be a finite measure on , depending measurably on .we shall write for the corresponding random measure on , obtained by choosing from according to the distribution and then taking . thus is defined by the point probabilities or , equivalently , the probability generating function [ rcpo ] since we have assumed that is a finite measure , ; thus a.s . and only finitely many are non - zero , whence a.s .this verifies that is a proper probability distribution . on the other hand ,the mean of is which may be infinite . as a consequence , let denote the total variation distance between two random variables , or rather their probability distributions , defined by where the supremum is taken over all measurable sets . we shall use the following trivial upper bound on the total variation distance between two compound poisson distributions .[ lb ] if and are two finite measures on , then let be as above and let be another family of independent poisson variables .we can easily couple the families so that for every .then given an integrable kernel family and , and ] , and let be the degree of . for with and , let be the number of added copies of that contain with corresponding to vertex in .let this is the number of edges added to , including possible repetitions . thus unless two added edges with endpoint coincide . for any other vertex , conditioned on the types ,the number of atoms containing both and is a sum of independent bernoulli variables , for in some index set . for each are such variables , each with .hence , since there are possible choices for , it follows that hence , in proving [ tda ] , it makes no difference whether we work with or with , i.e. , with the multi - graph or simple graph version of .conditioned on , is a sum of independent bernoulli variables for in some index set , with given by and .let . by a classical poisson approximation theorem ( see ( * ? ? 
?* ( 1.8 ) ) ) , ( this follows easily from the elementary ; see e.g. ( * ? ? ? * and theorem 2.m ) for history and further results . )furthermore , given , the random variables are independent , and thus and imply that if are independent , then since has a compound poisson distribution with intensity , we have by and lemma [ lb ] , this yields in particular , for every , taking in , taking the expectation of both sides , and noting that and , we find that we shall show that the final term is small . by , with , where the sum runs over all sequences of distinct elements in ] let be the degree of in , and the indicator } ] .we say that a set of connected graphs forms a _ tree decomposition _ of each is connected , the union of the is exactly , any two of the share at most one vertex , and the intersect in a tree - like structure .the last condition may be expressed by saying that the may be ordered so that each other than the first meets the union of the previous ones in exactly one vertex .equivalently , the intersection is tree - like if .equivalently , defining ( as usual ) a _ block _ of a graph to be either a maximal 2-connected subgraph of or a bridge in , forms a tree composition of if each is a connected union of one or more blocks of , with each block contained in exactly one .( cf . . )note that we allow , in which case .for , the order of the factors is irrelevant , so , for example , has a unique non - trivial tree decomposition , into two edges .note also that if is 2-connected , then it has only the trivial tree decomposition .let us say that a copy of in if _ regular _ if it is the union of graphs forming a tree decomposition of , where each arises directly as a subgraph of some atom , and for all ( with this intersection containing at most one vertex ) .we can write down exactly the probability that contains a regular copy of with vertex set in terms of certain integrals of products of conditional expectations .we shall not do so .instead , let where the sum runs over all tree decompositions of and each term is evaluated at the subset of corresponding to the vertices of , and set note that these definitions extend to disconnected graphs , taking the sum over all combinations of one tree decomposition for each component of .the upper bound easily implies that the expected number of regular copies of in is at most , and furthermore this bound is correct within a factor if is bounded ; the factor appears because there are potential copies of .note that the number of embeddings of a graph into a graph , i.e. , the number of injective homomorphisms from to , is simply .hence is the appropriate normalization for counting embeddings of into rather than copies of . in other contexts ,when dealing with dense graphs , it turns out to be most natural to consider homomorphisms from to , the number of which will be very close to .thus the normalization in is standard in related contexts .( see , for example , lovsz and szegedy . ) let us illustrate the definitions above with two simple examples .[ etk2 ] the simplest case is . in this case, there is only the trivial tree decomposition , and and yield [ etp2 ] suppose that contains only two non - zero kernels , , corresponding to an edge , and , corresponding to a triangle ; our aim is to calculate in this case , where is the path with 2 edges . 
using symmetry of and , while reflecting the fact the appears directly in if and only if we added a triangle with vertex set , and this vertex set corresponds to 6 -tuples .since , it follows that more generally , let be any ( simple ) subgraph of with components .( we abuse notation by now writing for a specific subgraph of , rather than an isomorphism class of graphs . )let list all atoms contributing edges of , and let , where we take the intersection in the multigraph sense , i.e. , intersect the edge sets .for example , if and are parallel edges in forming a double edge from to , and , , then contains no edge , even though and each do so . by definitioneach contains at least one edge , and is the edge - disjoint union of the .since has components , when adding the one by one , at least times a new component is _ not _ created , so at least times at least one vertex of , and hence of , is repeated .it follows that extending our earlier definition , we call _ regular _ if equality holds in , and _ exceptional _ otherwise .note that if any is disconnected , then is exceptional .let denote the number of regular copies of in , and the number of exceptional copies .[ th_ssbd ] let , where is a kernel family , and let be a graph with components .then if is bounded , then and we have essentially given the proof of the first statement , so let us just outline it . to construct a regular copy of in we must first choose graphs on forming a tree decomposition of each component of . then we must choose a graph containing each to be the atom that will contain . then we must choose distinct vertices from to be the vertices of the , where ( since is regular ) , we have .note that there are choices for the vertices .( we are glossing over the details of the counting , and in particular various factors for various graphs .it should be clear comparing the definition of with what follows that these are in the end accounted for correctly . ) given the vertex types , the probability that these particular graphs arise is then ( up to certain factors ) a product of factors of the form , where the kernel is evaluated at an appropriate subset of .note that the overall power of in the denominator is . integrating out over the variables to , and summing over all , the factor becomes a factor . finally , integrating out over the remaining variables ,corresponding to vertices of , and summing over decompositions , we obtain as an upper bound .if is bounded then the number of vertices appearing above is bounded , so , where the error term is uniform over all choices for .it follows that in this case , arguing similarly for exceptional copies , the power of in the denominator is now at least , and it follows that if is bounded , then as claimed .it follows that finally , for the variance we simply note that is the expected number of ordered pairs of not necessarily disjoint copies of in .if and share one or more vertices , then has at most components . from , the expected number of such pairs is .the expected number of pairs with and disjoint is simply , where is the disjoint union of two copies of and is a symmetry factor , the number of ways can be divided into 2 copies of .( if is connected then simply and in general , if has distinct components with multiplicities , then . 
) since , we have , so gives from which the variance bound follows .the final bound follows by chebyshev s inequality .for bounded kernel families , theorem [ th_ssbd ] is more or less the end of the story , although one can of course prove more precise results . for unbounded kernel familiesthe situation is much more complicated .let us first note that regular copies of do not give rise to any problems .[ th_reg ] let be a kernel family and a connected graph , and let . then .in other words , if , then for any constant , whp , while if , then .we consider the truncated kernel families . since is a sum of integrals of products of sums of integrals of the kernels , by monotone convergence we have as , and hence .if , choose so that , and couple and in the natural way so that .since is bounded , theorem [ th_ssbd ] implies that whp .since , the result follows .if , then given , the truncation argument above shows that holds whp . by the first statement of theorem [ th_ssbd ] , . combining these two bounds gives the result .note that we do not directly control the variance of ; as we shall see in section [ sec_pl ] , there are natural examples where is concentrated about its finite mean even though its variance tends to infinity .the very simplest case of theorem [ th_reg ] concerns edges ; we stated this as a separate result in the introduction . since all copies of in are regular ( and direct ) , , and taking in theorem [ th_reg ] and using yields , which is the first claim of theorem [ tedges ] .it remains to show that .the lower bound follows from the first part , since convergence in probability implies , while theorem [ th_ssbd ] gives , completing the proof .it is also easy to prove theorem [ tedges ] directly , using truncations as in this section but avoiding many complications present in the general case .by a _ moment _ of a kernel family we shall mean any integral of the form where are not necessarily distinct , and each term is evaluated at some -tuple of distinct .the proof of theorem [ th_ssbd ] shows that for any connected , is bounded by a sum of moments of .this gives a very strong condition under which we can control .[ th_mf ] let be a kernel family in which only finitely many kernels are non - zero .suppose also that all moments of are finite .then for any connected , , and the conclusions of theorem [ th_reg ] apply with replaced by .this is essentially trivial from the comments above and theorem [ th_reg ] .we omit the details .example [ badp2 ] shows that some conditions are necessary to control ; we refer the reader to for the description of the kernel family in this case . plugging into ,in this case we have for some constant ( in fact , ) , and it easily follows that .however , as shown in the discussion of that example , whp there is a vertex with degree at least , and hence at least copies of , which is much larger than if . in this casethe problem is exceptional arising from atoms and : the corresponding moment is infinite , due to the contribution from . of course, not all moments contribute to ; as we shall see in the next section , it is easy to obtain results similar to theorem [ th_mf ] under weaker assumptions in special cases .also , in general it may happen that has infinite expectation ( in the multigraph form ) , but is nonetheless often small , i.e. 
, that the large expectation comes from the small probability of having a vertex in very many copies of .much more generally , it turns out that when is integrable , whp all exceptional copies of sit on a rather small set of vertices .[ th_ess ] let be an integrable kernel family and a connected graph , with finite .let . for any , there is a such that whp _ every _ graph formed from by deleting at most vertices has .for any and any , whp there is _ some _ graph formed from by deleting at most vertices such that .together the statements above may be taken as saying that contains essentially copies of , where ` essentially ' means that we may ignore vertices .in other words , the ` bulk ' of contains this many copies of , though a few exceptional vertices may meet many more copies .we start with the second statement , since it is more or less immediate . indeed , writing for , and considering truncations as usual , from monotone convergence we have as .let , and be given .since is integrable , i.e. , , there is some such that .coupling and in the usual way , let us call a vertex _ bad _ if it meets an atom present in but not .the expected number of bad vertices is at most the expected sum of the sizes of the extra atoms , which is at most .hence the probability that there are more than bad vertices is at most . deleting all bad vertices from leaves a graph with at most copies of . applying theorem [ th_ssbd ], this number is at most whp , so we see that if is large enough , then with probability at least we may delete at most vertices to leave with at most copies of , as required . turning to the first statement, we may assume without loss of generality that is bounded .indeed , there is some truncation with , and taking as usual , it suffices to prove the same statement for with replaced by .assuming is bounded , then by theorem [ th_ssbd ] we have whp , so it suffices to prove that if is bounded and , then there is some such that whp any vertices of meet at most copies of .let be a fixed vertex of , and for let denote the number of homomorphisms from to mapping to vertex .let be the graph formed from two copies of meeting only at .then there are exactly homomorphisms from to mapping to , so in total there are homomorphisms from to . nowthe image of any homomorphism from to is a connected subgraph of , and each such subgraph is the image of homomorphisms .applying theorem [ th_ssbd ] to each of the possible isomorphism types of , it follows that there is some constant such that , whp , when the upper bound holds , given any set ] with lebesgue measure .our kernel family has only two non - zero kernels , , corresponding to edges , and to triangles , with and we could of course consider many other possible functions , but these seem the simplest and most natural for our purposes. it would be straightforward to carry out computations such as those that follow with each of the above replaced by a different constant , for example , although we should symmetrize the kernels in this case . however , one of these exponents would determine the power law , and it seems most natural to take them all equal . for convenience ,we define in particular , .we then have so is integrable .also , for the asymptotic edge density in theorem [ tedges ] , in the following subsections we apply our general results to determine various characteristics of this particular random graph . 
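To make the edge-and-triangle example concrete, the following sketch simulates a small instance of such a two-kernel family. The specific power-law kernels, the constants c_e, c_t, the exponent gamma, and the normalisation in which an atom on r vertices is present independently with probability roughly kappa/n^(r-1) are all illustrative assumptions here, since the exact formulas are not reproduced above; the construction does, however, follow the recipe of first laying down independent atoms and then replacing each atom by a complete graph.

```python
# Minimal sketch of the assumed edge-and-triangle kernel family.
# Kernel forms, constants and the kappa/n**(r-1) scaling are assumptions.
import itertools
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def sample_graph(n=200, c_e=1.0, c_t=0.5, gamma=0.25):
    """Sample one graph: independent edge and triangle atoms, atoms become cliques."""
    x = rng.uniform(1e-6, 1.0, size=n)            # vertex types in (0, 1]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    # edge atoms: present independently with probability ~ kappa_E / n
    for i, j in itertools.combinations(range(n), 2):
        if rng.random() < min(c_e * (x[i] * x[j]) ** (-gamma) / n, 1.0):
            g.add_edge(i, j)
    # triangle atoms: probability ~ kappa_T / n**2, each replaced by a clique
    for i, j, k in itertools.combinations(range(n), 3):
        if rng.random() < min(c_t * (x[i] * x[j] * x[k]) ** (-gamma) / n ** 2, 1.0):
            g.add_edge(i, j); g.add_edge(j, k); g.add_edge(i, k)
    return g, x

if __name__ == "__main__":
    g, x = sample_graph()
    print("vertices:", g.number_of_nodes(), "edges:", g.number_of_edges())
    print("largest component:", max(len(c) for c in nx.connected_components(g)))
```

The naive O(n^3) triangle loop is only meant for a few hundred vertices; anything larger would need a thinning or Poissonisation step.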
from and symmetry of and see that while for , since an edge contributes 1 to the degree of each endvertex , while a triangle contributes 2 to the degrees of its vertices , for each , the measure defined by is given by theorem [ tdegree ] then tells us that the degree distribution of converges to the mixed compound poisson distribution , where is the random measure corresponding to with chosen uniformly from ] , for any we have so the distribution of has a power - law tail . using the concentration properties of poisson distributions with large means , arguing as in the proof of corollary 13.1 of , it follows easily that as , so the asymptotic degree distribution does indeed have a power - law tail with ( cumulative ) exponent .let , so by theorem [ tdegree ] , the asymptotic fraction of vertices with degree is simply .if then it is not hard to check that in fact as , where , so the degree distribution is power - law in this stronger sense . if , then if is odd , but still holds for even , for a different ( doubled ) constant . from , we have , which we may rewrite as , where by theorems [ th1 ] and [ th2 ] , the largest component of is of size , and there is a giant component , i.e. , , if and only if . in this case is ` rank 1 ' in the terminology of , and we have hence , fixing and thus and , there is a giant component if and only if turning to the normalized size of the giant component , theorem [ unique ] allows us to calculate this in terms of the solution to a functional equation .usually this is intractable , but for the special we are considering this simplifies greatly , as in the rank 1 case of the edge - only model ; see section 16.4 of , or section 6.2 of . indeed , writing for the survival probability of , from we have which simplifies to where by lemma [ l_max ] , we have , so although we defined in terms of , we can view as an unknown constant , define by , and substitute back into .the function then solves if and only if solves and every solution to arises in this way .in particular , by theorems [ unique ] and [ th2 ] , there is a positive solution only in the supercritical case ( when holds ) , and that solution is then unique ; is always a solution . transforming the integral using the substitution , one can rewrite the right hand side of in terms of an incomplete gamma function , although it is not clear this is informative .the point is that the form of is given by , and the constant can in principle be found as the solution to an equation , and can very easily be found numerically for given values of , and . in the following subsectionswe shall need expressions for for various small graphs , where , defined by and , may be thought of as the asymptotic density of copies of in the kernel family .we start with direct copies of . since all atoms are edges or triangles , the only graphs that can be produced directly are edges , triangles , and , i.e. , paths with 2 edges .putting the specific kernels and into the formulae and from the previous section , we have and while edges may be formed only directly , so either from and or from , we have which agrees , as it should , with .since a triangle is 2-connected , it has no non - trivial tree decomposition , and and give which may also be seen by noting that the only regular copies of a triangle are those directly corresponding to . 
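As a rough numerical companion to the giant-component discussion above, one can sweep a kernel constant and watch the largest-component fraction in simulated graphs. This is only a Monte Carlo illustration: the critical point is characterised exactly by the norm of the integral operator mentioned earlier, and the helper sample_graph below is the hypothetical one from the previous sketch (assumed saved as kernel_family_sketch.py).

```python
# Crude empirical location of the phase transition by sweeping c_e.
# Assumes the sample_graph helper from the previous sketch.
import numpy as np
import networkx as nx
from kernel_family_sketch import sample_graph   # hypothetical module name

n, reps = 150, 3
for c_e in np.linspace(0.2, 2.0, 10):
    fracs = []
    for _ in range(reps):
        g, _ = sample_graph(n=n, c_e=c_e, c_t=0.3, gamma=0.25)
        fracs.append(max(len(comp) for comp in nx.connected_components(g)) / n)
    print(f"c_e = {c_e:.2f}  largest-component fraction ~ {np.mean(fracs):.3f}")
```

With n this small the transition is smeared out, so the sweep only indicates roughly where the largest component starts to occupy a constant fraction of the vertices.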
a copy of may be formed by a single triangular atom ( a direct copy ) , but may also be formed by two edges from different atoms .hence , as in example [ etp2 ] , in particular , if then is infinite . for , the star with three edges ,there are two types of tree - decompositions : three edges or one edge and one copy of , the latter occurring in 3 different ways .( there are no direct copies . )hence , and thus finally , for , there are again two types of tree - decompositions : three edges or one edge and one copy of , the latter now occurring in 2 different ways .hence , as we shall see , the counts above are enough to calculate two more interesting parameters of the graph .the _ clustering coefficient _ of a graph was introduced by watts and strogatz as a measure of the extent to which neighbours of a random vertex in tend to be joined directly to each other .after the degree distribution , it is one of the most studied parameters of real - world networks . as discussed in , for example, there are several different definitions of such clustering coefficients .one of these turns out to be most convenient for mathematical analysis , and is also very natural ; following , we call this coefficient .( hopefully there will be no confusion with our earlier use of for the number of vertices in the 2nd largest component . )the coefficient may be defined as a certain weighted average of the ` local clustering coefficients ' at individual vertices , but is also simply given by a ratio that is easily seen to lie between and .now from above we have .hence , by theorem [ th_reg ] , where , as usual , .we shall return to exceptional copies of shortly . if then is infinite , and will whp contain more than copies of .note that this is to be expected given the exponent of the asymptotic degree distribution , since in this case the expected square degree is infinite . from now onwe suppose that , so is finite .suppose for the moment that exceptional copies of and are negligible , i.e. , that by theorem [ th_reg ] , we have and , so it follows that where , from the formulae in subsection [ ss_counts ] , with given by .it follows that with the degree exponent fixed , this special case of our model can achieve any possible value of the clustering coefficient , with the trivial exception of ( achieved only by graphs that are vertex disjoint unions of cliques ) .indeed , for any , while taking we have which is decreasing as a function of , and tends to as and to as .let us note in passing that by theorem [ th_reg ] , if then is concentrated around its finite mean even though its variance , which involves the expected 4th power of the degree of a random vertex , tends to infinity .so far we considered only regular copies of and ; we now turn our attention to exceptional copies . unfortunately , for any , some moment of our kernel is infinite , so theorem [ th_mf ] does not apply .however , it is easy to describe the set of moments relevant to the calculation of for the graphs we consider .suppose that is an exceptional triangle ( or ; the argument is then almost identical ) in . 
since has ( at most )three edges , there are at most 3 atoms contributing edges to .let be the union of these atoms , considered as a multigraph .for example , if is the triangle , then might consist of the union of the three triangles , , and .in some sense this will turn out to be the ` worst ' case .let us fix the isomorphism type of , defined in the obvious way .let be the total number of vertices in , and write for the ` redundancy ' of . since is exceptional , .the expected number of exceptional arising in this way is exactly times a certain integral of products of and .from the form of and , we may write this as where is the number of the atoms that contain the vertex of . the initial factor is at most , while the integral is finite unless for some .since is made up of at most 3 atoms , we always have , so if then the relevant integrals ( i.e. , the relevant moments ) are finite , and we have , which certainly implies .in fact , we do not need to assume that .suppose that .then in the multigraph version of the model , .( consider , for example , three triangles sitting on 4 vertices as above . ) on the other hand , this does not mean that is often large . indeed ,when we choose our vertex types uniformly from ] and ^ 3 ] is the indicator function of the event . for this kernel ( family ) } , \\ { { \xi}({{\undertilde{{\kappa}}}})}&=\int{\kappa}_2=2apq , \\ \sigma_{k_2}(x , y)&=2{\kappa}_2(x , y)={{{\kappa}_{\mathrm e}}}(x , y)=2a{\boldsymbol1[x\neq y]}.\end{aligned}\ ] ] expanding the integrals as sums, it follows that substituting these expressions into and simplifying , we find that hence , and we have disassortative mixing as soon as , i.e. , when .we see also that the coefficient can be made to take any value in ] into two intervals ] , take on and on , to set if one of is in and the other in , and otherwise , and to define to be some constant times , where the constant depends on how many of , and lie in .although our main focus in this paper was the introduction of the model , and the study of the existence and size of the giant component in this graph , we shall close by briefly discussing some connections to earlier work that arise when considering the local structure of .let us start by considering subgraph counts .as before , let consist of one representative of each isomorphism class of finite graphs , and let consist of the connected graphs in . given two graphs and , let be the number of homomorphisms from to , and the number of embeddings , so .writing for a graph with vertices , in the dense case , where has edges , one can combine the normalized subgraph or embedding counts to define a metric that turns out to have very nice properties .( often one uses the equivalent homomorphism densities , but when we come to sparse graphs embeddings are more natural than homomorphisms . 
)a sequence converges in this _ subgraph metric _ if and only if there are constants , , such that for each .lovsz and szegedy characterised the possible limits , both in terms of kernels and algebraically .borgs , chayes , lovsz , ss and vesztergombi introduced the cut metric that we used in section [ sec_giant ] .they showed that this metric is equivalent to the subgraph metric , as well as to various other notions of convergence for sequences of dense graphs .one of the nicest features of these results is that for every point in the completion of the space of finite graphs ( with respect to any of these metrics ) , there is a natural random graph model ( called a -random graph in ) that produces sequences of graphs tending to this point .( see also diaconis and janson , where connections to certain infinite random graphs are described . )turning to sparse graphs , as described in , the situation is much less simple .when has edges , as here , the natural normalization is to consider , for each connected , under suitable additional assumptions on the sequences , one can again combine these counts to define a metric , and consider the possible limit points .unfortunately , not much is known about these ; see the discussion in . turning to our present model , theorem [ th_mf ] shows that if is a kernel family with only finitely many non - zero kernels and all moments finite , then for all connected , where and is given by .this suggests the following question .note that question [ q1 ] is very different from the question answered by lovsz and szegedy : our definition of is different from the corresponding notion studied there , since it is adapted to the setting of sparse graphs .in particular , if consists only of a single kernel ( as in ) , then we have for any that is not a tree . as discussed in ( * ? ? ?* question 8.1 ) , it is an interesting question to ask whether , for various natural metrics on sparse graphs , one can provide natural random graph models corresponding to points in the completion . for those vectors where the answer to question [ q1 ] is yes , the model provides an affirmative answer ( at least if is bounded , say ) . but these points will presumably only be a very small subset of the possible limits , so there are many corresponding models still to be found . as noted in (* sections 3,7 ) , rather than considering subgraph counts , for graphs with edges it is more natural to consider directly the probability that the -neighbourhood of a random vertex is a certain graph ; the subgraph counts may be viewed as moments of these probabilities .more precisely , let be the set of isomorphism classes of connected , locally finite rooted graphs , and for , let be the set of isomorphism classes of finite connected rooted graphs with _ radius _ at most , i.e. , in which all vertices are within distance of the root .a probability distribution on naturally induces a probability distribution on each , obtained by taking a -random element of and deleting any vertices at distance more than from the root .given and a graph with vertices , let be the probability that a random vertex of has the property that its neighbourhoods up to distance form a graph isomorphic to , with as the root .a sequence with has _ local limit _ if for every and all .this notion has been introduced in several different contexts under different names : aldous and steele used the term ` local weak limit ' , and aldous and lyons the name ` random weak limit ' . 
also , benjamini and schramm defined a corresponding ` distributional limit ' of certain random graphs .notationally it is convenient to map a graph to the point ^{{{{\mathcal g}}^{\mathrm r}_t}}$ ] , and to define similarly . taking any metric on giving rise to the product topology, we obtain a metric on the set of graphs together with probability distributions on , and has local limit if and only if .as noted in , under suitable assumptions ( which will hold here if is bounded , for example ) , the two notions of convergence described above are equivalent , and one can pass from the limiting normalized subgraph counts to the distribution and _ vice versa_. also , if is a bounded kernel , then the random graphs defined in have as local limit a certain distribution associated to .this latter observation extends to the present model , and as we shall now see , no boundedness restriction is needed .given an integrable hyperkernel , let be the random ( potentially infinite ) rooted graph associated to the branching process .this is defined in the natural way : we take the root of as the root vertex , for each child clique of the root we take a complete graph in , with these cliques sharing only the root vertex .each child of the root then corresponds to a non - root vertex in one of these cliques , and we add further cliques meeting only in to correspond to the child cliques of , and so on . more generally , given an integrable kernel family , we may define a random rooted graph in an analogous way ; we omit the details .we write for the probability distribution on associated to .in fact , we conjecture that almost sure convergence holds for any coupling of the for different , and in particular if the different are taken to be independent .( the case of independent is the extreme case , which by standard arguments implies a.s. convergence for every other coupling too ; a.s . convergence in this case is known as _complete convergence_. ) writing for the probability distribution on induced by , by definition we have if and only if for each and each .the special case where is a bounded hyperkernel is essentially immediate : is simply a formal statement of the local coupling established for bounded hyperkernels in section [ sec_loc ] .exactly the same argument applies to a bounded kernel family . for the extension to general kernel familieswe need a couple of easy lemmas .this is an extension of proposition 8.11 of ; the proof carries over _ mutatis mutandis _ , using theorem [ th_ssbd ] with to bound the sum of the squares of the vertex degrees in the bounded case .the key step is to use edge integrability to find a bounded kernel family such that may be regarded as a subgraph of containing all but at most of the edges .it turns out that we can weaken edge integrability to integrability .the price we pay is that we can not control the number of edges incident to a small set of vertices , but only the size of the neighbourhood . as usual , given a set of vertices in a graph , we write for the set of vertices at graph distance at most from , so . 
replacing each atom by a clique , we may and shall assume that is a hyperkernel .let be the kernel family obtained from by replacing each clique by a star .since is integrable , is edge integrable .let be the function given by lemma [ le1 ] , and set .then whp every set of at most vertices of has and hence .coupling and in the obvious way , vertices adjacent in are at distance at most in , and the result follows .let be the number of atoms with vertices , and .let , and let be the contribution to from kernels corresponding to graphs with vertices , so and .given , there is an such that .each has a poisson distribution and is thus concentrated about its mean , so whp writing for , since was arbitrary we have shown that .since is bounded , it follows that .but , so .hence , so as claimed .let be an integrable kernel family , and let .fix , , and .it suffices to prove that then letting we have , so holds . since and are arbitrary , this implies .applying lemma [ le2 ] times , there is a such that whp any set of at most vertices of satisfies .since is integrable , there is a bounded kernel family which satisfies pointwise and . as , we have pointwise , and it follows that ; the argument is as for theorem [ tappc](i ) . taking large enough , we may thus assume that .let . since is bounded , we have . coupling and as usual so that , let be the set of vertices incident with an atom present in but not . by lemma [ badv ]we have whp , so whp no more than vertices are within distance of vertices in .but then whp , and follows .the general question of which probability distributions on arise as local limits of sequences of finite graphs seems to be rather difficult .there is a natural necessary condition noted in different forms in all of ; see also ( * ? ? ?* section 7 ) .aldous and lyons asked whether this condition is sufficient , emphasizing the importance of this open question .let us finish with a related but perhaps much simpler question : given , we defined as a branching process in which the particles have types .but in the corresponding random graph these types are not recorded .this means that can not simply be read out of the distribution of , i.e. , out of .this suggests the following question .part of this research was done during visits of sj to the university of cambridge and trinity college in 2007 , and to the isaac newton institute in cambridge , funded by a microsoft fellowship , in 2008 .d. aldous and j.m .steele , the objective method : probabilistic combinatorial optimization and local weak convergence , in _ probability on discrete structures _ , _ encyclopaedia math .sci . _ * 110 * , springer ( 2004 ) , pp .172 .a. d. barbour , l. holst and s. janson , _poisson approximation_. oxford university press , oxford , uk , 1992 .i. benjamini and o. schramm , recurrence of distributional limits of finite planar graphs , _ electron . j. probab ._ * 6 * ( 2001 ) , no .23 , 13 pp .( electronic ) .b. bollobs and o. riordan , metrics for sparse graphs , in _ surveys in combinatorics 2009 _ , london math . soc .lecture note series * 365 * , s. huczynska , j.d .mitchell and c.m.roney-dougal eds , cup ( 2009 ) , pp .212 - 287 .b. bollobs and o. riordan , sparse graphs : metrics and random models , preprint ( 2007 ) . `\simarxiv:0812.2656 . ` c. borgs , j. t. chayes , l. lovsz , v. t. ss and k. vesztergombi , convergent sequences of dense graphs i : subgraph frequencies , metric properties and testing , _ advances in math . _* 219 * ( 2008 ) , 18011851 .
in 2007 we introduced a general model of sparse random graphs with ( conditional ) independence between the edges . the aim of this paper is to present an extension of this model in which the edges are far from independent , and to prove several results about this extension . the basic idea is to construct the random graph by adding not only edges but also other small graphs . in other words , we first construct an inhomogeneous random hypergraph with ( conditionally ) independent hyperedges , and then replace each hyperedge by a ( perhaps complete ) graph . although flexible enough to produce graphs with significant dependence between edges , this model is nonetheless mathematically tractable . indeed , we find the critical point where a giant component emerges in full generality , in terms of the norm of a certain integral operator , and relate the size of the giant component to the survival probability of a certain ( non - poisson ) multi - type branching process . while our main focus is the phase transition , we also study the degree distribution and the numbers of small subgraphs . we illustrate the model with a simple special case that produces graphs with power - law degree sequences with a wide range of degree exponents and clustering coefficients .
nonlinear time series analysis generally assumes stationarity ( see * ? ? ?* ; * ? ? ?* for an overview ) .however , many time series are actually nonstationary for various reasons , such as temperature drift in the experimental setup , decreasing reservoirs in ( bio)chemical reactors or ecological systems , global warming in climate data , varying heart rate in cardiology , or vibrato / tremor in vocal fold vibrations .such nonstationarities can be modeled by underlying parameters , referred to as driving forces , that change the dynamics of the system smoothly on a slow time scale or abruptly but rarely , e.g. if the dynamics switches between different discrete states .if a test reveals that a time series is nonstationary , one can still apply methods of stationary time series analysis if one determines sections where the driving force has similar values and analyzes these sections as one stationary time series .this can be done by first slicing the time series into windows of equal size and then grouping the windows based on similarity measures of the dynamics .this division of the time series can be avoided by the technique of overembedding .if the embedding dimension is sufficiently high , then similar embedding vectors automatically belong to similar values of the driving forces and all available data can be used for analysis , such as forecasting or nonlinear noise reduction . however , in some cases , e.g. in the analysis of eeg data , one is particularly interested in revealing the driving forces themselves , which is in principle only possible up to an invertible transformation .one standard method for visualizing driving forces is the recurrence plot , but it is often difficult to interpret .methods have been developed to estimate driving forces based on the finding that the recurrence plot of a time series is similar to the recurrence plot of its underlying driving forces .another technique for the reconstruction of driving forces has been presented in . herei present an alternative approach based on slow feature analysis , a new technique developed in the field of theoretical neurobiology .slow feature analysis ( sfa ) has been originally developed in context of an abstract model of unsupervised learning of invariances in the visual system of vertebrates and is described in detail in .the general objective of sfa is to extract slowly varying features from a quickly varying signal . for a scalar output signalit can be formalized as follows .let be an -dimensional input signal where indicates time and ^t ] onto itself with a functional form `` '' for ( or ) .the parameter shifts this wedge cyclically to the left until it becomes a `` '' for .figure [ fig : smoothtent ] shows the true driving force , the time series , and the estimated driving force with , , and third order polynomials for sfa .the correlation between true and estimated driving force is .note that the scale and offset of the estimated driving force are arbitrarily fixed by the constraints and that the sign is random .the axes were therefore chosen such that the curves in the bottom graphs of this and the following figures are optimally aligned with each other . 
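A minimal SFA pipeline for a driven tent-map series of this kind might look as follows. The sinusoidal driving force, the skew-tent map used as a stand-in, and the embedding parameters (m = 3, tau = 1, cubic monomials) are illustrative assumptions rather than the exact settings of the experiment above; the algorithmic steps are the standard SFA ones (delay embedding, polynomial expansion, sphering, then projection onto the direction whose time derivative has the smallest variance).

```python
# Sketch: estimate a slowly drifting tent-map parameter with SFA.
# Driving force, map details, m, tau and the polynomial degree are assumptions.
import numpy as np
from itertools import combinations_with_replacement

def driven_tent_series(T=5000):
    """Skew tent map whose peak position gamma drifts slowly (assumed drive)."""
    gamma = 0.4 + 0.2 * np.sin(2 * np.pi * np.arange(T) / T)
    w = np.empty(T)
    w[0] = 0.37
    for t in range(1, T):
        g = gamma[t]
        w[t] = w[t - 1] / g if w[t - 1] < g else (1.0 - w[t - 1]) / (1.0 - g)
    return w, gamma

def delay_embed(w, m=3, tau=1):
    N = len(w) - (m - 1) * tau
    return np.column_stack([w[i * tau: i * tau + N] for i in range(m)])

def poly_expand(X, degree=3):
    """All monomials of the input components up to the given degree."""
    cols = []
    for d in range(1, degree + 1):
        for c in combinations_with_replacement(range(X.shape[1]), d):
            cols.append(np.prod(X[:, list(c)], axis=1))
    return np.column_stack(cols)

def sfa(Z):
    """Slowest feature: sphere Z, then take the direction of smallest derivative variance."""
    Z = Z - Z.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    keep = vals > 1e-10 * vals.max()
    S = vecs[:, keep] / np.sqrt(vals[keep])        # sphering matrix
    Zs = Z @ S
    dvals, dvecs = np.linalg.eigh(np.cov(np.diff(Zs, axis=0), rowvar=False))
    return Zs @ dvecs[:, 0]                        # smallest eigenvalue = slowest direction

w, gamma = driven_tent_series()
m, tau = 3, 1
y = sfa(poly_expand(delay_embed(w, m, tau), degree=3))
truth = gamma[(m - 1) * tau: (m - 1) * tau + len(y)]
print("|corr(true, estimated)| =", abs(np.corrcoef(truth, y)[0, 1]))
```

The absolute value of the correlation is reported because, as noted above, the sign, scale and offset of the SFA output are not determined by the optimisation.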
as a second example consider a time series derived from a logistic map which maps the interval ] and has the shape of an upside - down parabola crossing the abscissa at 0 and 1 .parameter governs the height of the parabola .figure [ fig : smoothlogistic ] shows the true driving force , the time series , and the estimated driving force with , , and second order polynomials for sfa .the correlation between true and estimated driving force is .[ [ choice - of - parameters ] ] choice of parameters + + + + + + + + + + + + + + + + + + + + sfa is basically parameter - free except for the general choice of the class of nonlinear basis functions .the only other parameters to choose in this method are the dimension and the time delay of the embedding vectors .the value of is a fairly reliable indicator for a good choice of values for and .the smaller the better , since there is no trivial way of achieving small -values ( except if the number of basis functions comes close to the number of data points ) .i typically test a certain range of -values and successively increase the -value until performance ( measured in terms of or ) is satisfactory .[ [ rarely - varying - driving - forces ] ] rarely varying driving forces + + + + + + + + + + + + + + + + + + + + + + + + + + + + + if sfa is designed to extract slowly varying features from a signal , how about driving forces that apparently violate this assumption heavily , e.g. if they vary abruptly and jump between different discrete values ?the definition of slowness given in ( [ eq : slowness ] ) does actually not depend on the smoothness of the output signal , it may equally well be a signal that switches abruptly between different discrete values as long as the jumps occur rarely enough to lead to the same variance of the time derivative .whether the order of the values can be recovered depends on the dynamics of the driving force .sfa will tend to order the values such that jumps occur between nearby values of the estimated driving force .thus , if the original values are -0.5 , 0 , and + 0.5 and there are many direct jumps between -0.5 and + 0.5 , sfa might map the three values onto -1 , + 1 , and 0 , respectively , thereby changing the order of the second and third value .figures [ fig : stepstent ] and [ fig : stepslogistic ] show results for the two systems if a rarely instead of a slowly and smoothly varying driving force is used .[ [ high - dimensional - input - data ] ] high - dimensional input data + + + + + + + + + + + + + + + + + + + + + + + + + + + probably the most severe limitation of sfa is the fact that the number of monomials ( or any other basis functions ) grows quickly with the dimensionality of the embedding vectors ( curse of dimensionality ) .one therefore might run out of computer memory before a high enough -value is reached .however , higher - dimensional problems could be dealt with in a hierarchical fashion by breaking the embedding vectors into smaller parts which are first analyzed separately and the results of which are then combined for a final analysis . applying this hierarchical scheme to 65-dimensional input vectors has been demonstrated in . 
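For the rarely varying case discussed above, the input data can be generated along the following lines; the discrete parameter levels, the switching rate and the logistic-map details are assumptions for illustration only. The delay embedding and SFA steps from the earlier sketch then apply unchanged.

```python
# Sketch: a driving force that jumps rarely between a few discrete levels,
# here driving a logistic map. Levels and switching rate are assumed values.
import numpy as np

rng = np.random.default_rng(1)
T = 5000
levels = [3.6, 3.75, 3.9]                    # assumed discrete parameter values
r = np.empty(T)
r[0] = rng.choice(levels)
for t in range(1, T):
    # switch to a (possibly new) level only rarely, on average every ~500 steps
    r[t] = rng.choice(levels) if rng.random() < 1 / 500 else r[t - 1]

w = np.empty(T)
w[0] = 0.3
for t in range(1, T):
    w[t] = r[t] * w[t - 1] * (1 - w[t - 1])  # driven logistic map, stays in [0, 1]
print("number of jumps in the driving force:", int((np.diff(r) != 0).sum()))
```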
[[ accuracy - of - the - estimated - driving - force ] ] accuracy of the estimated driving force + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + it might be surprising that in the examples above sfa is able to estimate the driving forces with such an accuracy up to a factor and a constant offset , even though the estimation is undetermined up to any invertible transformation , not only scaling and shift .one reason for that is that the driving forces used are relatively slow already and could probably not be improved much by an invertible nonlinear transformation .however , even if that were not the case one might hope that in practice for the lower - dimensional embedding vectors ( instead of , e.g. , ) sfa has to find a relatively simple input - output function , which is more likely to preserve the exact shape of the driving - force curve than a more complicated one .further experiments are needed to verify this .[ [ noise - sensitivity ] ] noise sensitivity + + + + + + + + + + + + + + + + + how sensitive is the method to noise ?one might suspect that sfa is quite sensitive to noise , since eigenvectors of smallest eigenvalues are used .however , focusing on small eigenvalues is only done after sphering and taking the time derivative , so that small but quickly varying noise components typically have large eigenvalues and are discarded by sfa . graceful degradation with noisewas also found in .thus , using the eigenvectors with smallest eigenvalues in itself does not induce noise sensitivity .i have done simulations with the examples presented here by adding gaussian white noise to the signals before embedding .adding 10% , 20% , and 50% noise to the tent map time series reduced the correlation between true and estimated driving force from to about 0.94 , 0.90 , and 0.71 , respectively ; adding 2% , 5% , and 10% noise to the logistic map time series reduced the correlation from to about 0.97 , 0.87 , and 0.70 , respectively .thus we see , that in these examples the method is in fact fairly robust with respect to noise .[ [ multiple - driving - forces ] ] multiple driving forces + + + + + + + + + + + + + + + + + + + + + + + sfa can be easily extended to the extraction of multidimensional output signals , which could be used to estimate multiple driving forces .however , this might require longer time series and higher - dimensional embedding vectors .if the different driving forces are not clearly separated by different time scales , sfa might only be able to estimate them up to a linear mixing transformation .in that case independent component analysis ( ica ) might be able to separate them , if they are statistically independent and not more than one is gaussian ( see * ? ? 
?* for an overview of ica methods ) .in this paper i have demonstrated that slow feature analysis ( sfa ) can be applied to the problem of estimating a driving force of a nonstationary time series .the estimates are fairly accurate except for a scaling factor and a constant offset , which can not be extracted in general .the method works for slowly as well as rarely varying driving forces , although in the latter case the order of the different discrete values might not be recoverable .the slowness principle also provides robustness to noise , which is typically quickly varying and therefore suppressed by sfa as far as possible .there are still a number of open questions .the next steps of investigation will have to include a comparison with standard methods , such as the technique of recurrence plots , and an exploration of the conditions under which sfa can be applied with similar success as demonstrated here .it is also necessary to apply sfa to real world data .this has been successfully done in the context of learning receptive field properties of the visual cortex based on natural image sequences , but that was not for extracting driving forces . in any case , since sfa works very differently from other techniques that have been applied to the estimation of driving forces , one can hope that it at least complements these other techniques in some cases .i am grateful to hanspeter herzel and isao tokuda for useful hints and hanspeter herzel also for critically reading the manuscript .this work has been supported by the volkswagenstiftung .( 2003 ) . http://itb.biologie.hu-berlin.de/~wiskott/abstracts/berkwisk2003a.html[slow feature analysis yields a rich repertoire of complex - cell properties ] .cognitive sciences eprint archive ( cogprints ) 2804 , http://cogprints.ecs.soton.ac.uk / archive/00002804/. ( 1998 ) .http://itb.biologie.hu-berlin.de/~wiskott/abstracts/wis98a.html[learning invariance manifolds ] . in _ proc .5th joint symp . on neural computation , san diego _ , pages 196203 .university of california , san diego .http://itb.biologie.hu-berlin.de/~wiskott/abstracts/wissej2002.html[slow feature analysis : unsupervised learning of invariances ] .http://neco.mitpress.org/cgi/content/abstract/14/4/715[_neural computation _ , 14(4):715770 ] .
Slow feature analysis (SFA) is a new technique for extracting slowly varying features from a quickly varying signal. It is shown here that SFA can be applied to nonstationary time series to estimate a single underlying driving force with high accuracy up to a constant offset and a factor. Examples with a tent map and a logistic map illustrate the performance.
in most of the developed world neonatal care has been organized into networks of cooperating hospitals ( units ) in order to provide better and more efficient care for the local population . a neonatal or perinatal network in the uk offers all ranges of neonatal care referred to as intensive , high dependency and special care through level to level units .recent studies show that perinatal networks in the uk have been struggling with severe capacity crisis .expanding capacity by number of beds in the unit , in general , is not an option since neonatal care is an unusually expensive therapy .reducing capacity is not an option either , as this would risk sick neonates being denied admission to the unit or released prematurely .consequently , determining cot capacity has become a major concern for perinatal network managers in the uk .queueing models having zero buffer also referred to as ` loss models ' have been widely applied in hospital systems and intensive care in particular ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ? proposed an m / m / c/0 loss model for capacity management in an operating theatre - intensive care unit . developed an overflow model with loss framework for capacity planning in intensive care units while developed a loss network model for a neonatal unit , and extended the model framework to a perinatal network in .these models assume that inter - arrival times and length of stay follow exponential distributions .queueing models with exponential inter - arrival and service times are easiest to study , since such processes are markov chains .however , length of stay distribution in intensive care may be highly skewed .performance measures of a queueing system with non - zero buffer are insensitive to service time distribution provided that the arrival process is poisson .this insensitivity property is , in general , no longer valid in the case of zero buffer or loss systems .many approaches have been found towards generalizing such processes since erlang introduced the m / m / c/0 model for a simple telephone network and derived the well - known loss formula that carries his name in 1917 . considered the loss system with general arrival pattern ( gi / m / c/0 ) through laplace transform .nowadays there has been a growing interest in loss systems where both arrival and service patterns are generalized ( gi / g / c/0 ) .the theoretical investigation of the gi / g / c/0 loss model through the theory of random point processes has attracted many researchers . gave a method for approximating the gi / gi / c/0 queue by means of the gi / gi/ queue , while applied a similar approximation under heavy traffic . examined the continuity property of the model , and established an equivalence between arrival and departure probability . gave an approximation method for the batch - arrival gi}$]/g / c / n queue which is applicable when the traffic intensity is less than one .the m / g / c / n and the gi / g / c / n queue have also been studied widely ; for a comparison of methods , see .although many studies have been found in the literature , no simple expression for the steady state distribution is available for a gi / g / c/0 system . 
provided the exact solution for the gi / gi / c/0 system expressing the inter - arrival and service time by matrix exponential distribution .the method is computationally intensive and often includes imaginary components in the expression ( which are unrealistic ) .diffusion approximations , which require complicated laplace transforms have also been used for analyzing gi / g / c / n queues ( e.g. , * ? ? ?* ; * ? ? ? derived a transform - free expression for the analysis of the gi / g/1/n queue through the decomposed little s formula .a two - moment approximation was proposed to estimate the steady state queue length distribution . using the same approximation , extended the system for the multi - server finite buffer queue based on the system equations derived by . developed a heuristic approach for the numerical analysis of gi / g / c/0 queueing systems with examples of the two - phase coxian distribution . in this paperwe derive a generalized loss network model with overflow for a network of neonatal hospitals extending the results obtained by .since some model parameters can not be computed practically , a two - moment based approximation method is applied for the steady state analysis as proposed by .the model is then applied to the north central london perinatal network , one of the busiest network in the uk .data obtained from each hospital ( neonatal unit ) of the network have been used to check the performance of the model .the rest of the paper is organized as follows : in the next section we first discuss a typical perinatal network and then develop a generalized loss model with overflow for the network . the steady state distribution and expression for rejection and overflow probabilitieshave been derived for each level of care of the neonatal units .application of the model and numerical results are presented in section [ section4 ] .a perinatal network in the uk is organized through level , level and level units .figure [ fig1 ] shows a typical perinatal network in the uk .level units consist of a special care baby unit ( scbu ) .it provides only special care which is the least intensive and most common type of care . in these units, neonates may be fed through a tube , supplied with extra oxygen or treated with ultraviolet light for jaundice .figure [ fig2 ] shows the typical patient flow in a level unit .a level unit may also have an intensive therapy unit ( itu ) which provides short - term intensive care to neonates , and the unit may then be referred to as ` level unit with itu ' .figure [ fig3 ] shows the structure of a level unit with itu .level units consist of a scbu and a hdu where neonates can receive high dependency care such as breathing via continuous positive airway pressure or intravenous feeding .these units may also provide short - term intensive care .a level unit provides all ranges of neonatal care and consists of an scbu , an hdu and an nicu where neonates will often be on a ventilator and need constant care to be kept alive . level and level units may also have some transitional care ( tc ) cots , which may be used to tackle overflow and rejection from scbu .although level and level units have similar structures level units might not have sufficient clinician support for the nicu . nicu are hdu are often merged in level and level units for higher utilization of cots . 
in level or level units ,nicu - hdu neonates are sometimes initially cared at scbu when all nicu cots are occupied .similarly scbu neonates are cared at nicu - hdu or tc , depending upon the availability of cots , staff and circumstances .this temporary care is provided by staffing a cot with appropriate nurse and equipment resources , and will be referred to as ` overflow ' .rejection occurs only when all cots are occupied ; in such cases neonates are transferred to another neonatal unit .patient flows in a typical level or level unit are depicted in figure [ fig4 ] . unlike for level /level units, overflow does not occur in level units with itu . the underlying admission , discharge and transfer policies of a perinatal network are described below . 1 .all mothers expecting birth week of gestational age or all neonates with week of gestational age are transferred to a level unit .mothers expecting birth but week of gestational age or all neonates of the same gestational age are transferred to a level unit depending upon the booked place of delivery .all neonatal units accept neonates for special care booked at the same unit .neonates admitted into units other than their booked place of delivery are transferred back to their respective neonatal unit receiving after the required level of care .now we shall develop a generalized loss network framework for a perinatal network with level , level and level units . to obtain the steady state behavior of the network ,we first decompose the whole network into a set of subnetworks ( i.e. , each neonatal units ) due to higher dimensionality , then we derive the steady state solution and expression of rejection probability for each of the units . when analyzing a particular sub - network in isolation , back transfers are combined with new arrivals to specifically take into account the dependencies between units .cot capacity for the neonatal units may be determined based on the rejection probabilities at each level of care and overflow to temporary care of the units .a level unit consists typically of a scbu . therefore , assuming no waiting space and first come first served ( fcfs ) discipline , a level unit can be modelled as a gi / g / c/0 system .let the inter - arrival times and length of stay of neonates be i.i.d .random variables denoted by and , respectively .also the length of stay is independent of the arrival process .define let denotes the number of neonates in the system at an arbitrary time , denotes the number of neonates ( arriving ) who find the system is in steady state with neonates , and denotes the number of neonates discharged from the system in steady state with neonates .let be the number of cots at the scbu . for ,let and where is the remaining inter - arrival time at the discharge instant of a neonate who leaves behind neonates in the systems , ( ) is the remaining length of stay of a randomly chosen occupied cot at the arrival ( discharge ) instant of a neonate who finds ( leaves behind ) neonates in the system .let and be , respectively , the mean inter - arrival time and the mean length of stay under the condition that the system started at the arrival instant of a neonate when there were neonates in the system . 
clearly , from the definitions , we obtain we set for convenience .then the first set of system equations obtained by for a gi / g / c/0 loss system can be written as the second set of system equations can be given by and from the first set of system equations for the gi / g / c/0 queue , the following equations can be derived and from the second set of system equations for the gi / g / c/0 queue , the following equations can be derived ,\;\ ; 1\leq n\leq c-1 , \label{eq7.3}\end{gathered}\ ] ] and .\label{eq7.4}\ ] ] [ th01 ] the steady state distribution for a gi / g / c/0 system is given by and where and ,\;\ ; 1\leq i\leq c. \end{array } \right .\end{array } \right\}\ ] ] the steady state distribution can be obtained by solving the above two sets of system equations .first , we equate equations ( [ eq7.1 ] ) and ( [ eq7.2 ] ) with equations ( [ eq7.3 ] ) and ( [ eq7.4 ] ) for each , . then using the following well - known rate conservation principle , we solve them simultaneously , we obtain equation ( [ eq7.5 ] ) . in steady state analysis of a gi / g / c/0 system , equations in ( [ eq7.6 ] ) involve quantities , and , which are not easy to compute in general , except for some special cases such as poisson arrival or exponential length of stay .therefore , a two moment approximation is used as proposed by and for the steady state distribution of the gi / g / c/0 system based on the exact results as derived in equations [ eq7.5 ] and [ eq7.6 ] . to obtain the approximation, we replace the inter - arrival and length of stay average quantities , and by their corresponding time - average quantities ; where is the squared coefficient of variation of inter - arrival times ( length of stay ) . using equations ( [ eq7.7 ] ) and ( [ eq7.8 ] ) in equation ( [ eq7.5 ] ) , we obtain the two moment approximation for the steady state distribution and where and ,\;\ ; 1\leq i\leq c-1,\smallskip\\ \lambda\big[m_a+\big(m_a - q_a\big)\tilde{\mu}_i/\tilde{\lambda}_{i-1}\big],\;\ ; i = c .\end{array } \right .\end{array } \right\}\ ] ] therefore , the rejection probability for a level unit is computed as in a level unit with itu ( figure [ fig3 ] ) , overflow from itu to scbu does not occur .the unit can be modelled as two joint gi / g / c/0 systems . 
therefore , extending the theorem [ th01 ] , the steady state distribution for a level neonatal unit with itu is given by and where the approximate steady state distribution for a level neonatal unit with itu is given by and where and , , , and are defined by equations in ( [ eq7.10 ] ) for nicu - hdu and scbu - tc , respectively .the rejection probability at the level of care is calculated as where and .we derive the mathematical model for a level 3/level 2 neonatal unit as described in section [ section2 ] and showing in figure [ fig4 ] .let , and be the number of cots at nicu - hdu , scbu and tc , respectively .let be the number of neonates at unit , and be the number of neonates overflowing from unit to unit , at time .then the vector process is a continuous - time discrete - valued stochastic process .we assume the process is time homogeneous , aperiodic and irreducible on its finite state space .the process does not necessarily need to hold the markov property .the state space is given by where , is the number of neonates at the main unit , and , is the number of neonates at the overflow unit from the unit .now the system can be modelled as two joint loss queueing processes with overflow .assume that the joint gi / g / c/0 systems are in steady state .we shall now derive the expression for the steady state distribution for a level /level neonatal unit . extending the theorem [ th01 ] for two joint gi / g / c/0 systems , the steady state distribution for a level or level neonatal unit with overflowscan be derived .[ th1 ] the steady state distribution for a level or level unit can be given by and where , , , , are arrival and departure related quantities for nicu - hdu and scbu - tc , respectively , defined by equations in ( [ eq7.6 ] ) , and is the normalizing constant .the approximate steady state distribution for a level /level neonatal unit is given by and where , , , and are defined by equations in ( [ eq7.10 ] ) for nicu - hdu and scbu - tc , respectively , and the rejection probability at the level of care for a level /level neonatal unit is computed as where and the overflow probability at the level of care for a level /level unit can also be computed from equation ( [ eq7.12 ] ) substituting by , + where and [ th2 ] the approximate steady state distribution for a level or level neonatal unit is exact for exponential inter - arrival time and length of stay distributions at each level of care . in the case of exponential inter - arrival time andlength of stay distributions , arrival and departure related parameters reduce to the corresponding mean values of inter - arrival and length of stay and then the steady state solution becomes hence we obtain where which is the steady state solution for a level unit as in for markovian arrival and discharge patterns .adding back transfers , we can easily obtain the steady state distribution for a level unit .we apply the model to the case of a perinatal network in london which is the north central london perinatal network ( nclpn ) .the network consists of five neonatal units : uclh ( level ) , barnet ( level ) , whittington ( level ) , royal free ( level with itu ) and chase farm ( level ) .the underlying aim of the network is to achieve capacity so that 95% women and neonates may be cared for within the network .data on admission and length of stay were provided by each of the units . 
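Before turning to the data, it may help to sketch the kind of loss-system simulation used for validation in what follows. The block below is a minimal, generic GI/G/c/0 discrete-event sketch, not the SIMUL8 model used in the paper: neonates arriving when all c cots are occupied are rejected. The lognormal length-of-stay choice and the example figures are placeholders, not the NCLPN data.

```python
# Minimal discrete-event simulation of a GI/G/c/0 loss unit.
# Distributions and parameters below are illustrative placeholders.
import heapq
import numpy as np

def loss_sim(c, mean_iat, mean_los, n_arrivals=200_000, seed=0):
    rng = np.random.default_rng(seed)
    iat = rng.exponential(mean_iat, n_arrivals)      # swap in any renewal process
    sigma = 0.8                                      # assumed length-of-stay variability
    mu = np.log(mean_los) - sigma ** 2 / 2
    los = rng.lognormal(mu, sigma, n_arrivals)
    t, busy, rejected = 0.0, [], 0                   # busy = heap of discharge times
    for a, s in zip(iat, los):
        t += a
        while busy and busy[0] <= t:                 # free cots whose stays have ended
            heapq.heappop(busy)
        if len(busy) >= c:
            rejected += 1                            # all cots occupied: rejection
        else:
            heapq.heappush(busy, t + s)
    return rejected / n_arrivals

# e.g. a 10-cot unit with mean inter-arrival 2 days and mean stay 14 days
print("estimated rejection probability:", loss_sim(c=10, mean_iat=2.0, mean_los=14.0))
```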
since the data did not contain the actual arrival rate and the rejection probability for the units we estimated the actual arrival rates using simul8^^ , a computer simulation package designed to model and measure performances of a stochastic service system .table [ tab1 ] presents mean length of stay and estimated mean inter - arrival times for each level of care at uclh , barnet , whittington , royal free and chase farm neonatal units for the year .then we also use simulation ( simul8 ) to estimate the rejection probabilities for each level of care of the units for various arrival and discharge patterns .we refer to these estimates as ` observed ' rejection probabilities . in this section rejection probabilitiesare estimated for all five units in the nclpn through the application of the model formulae in section [ section3 ] .an extensive numerical investigation has been carried out for a variety of inter - arrival and length of stay distributions to test the performance of the model and the approximation method .table [ tab2 ] compares the ` observed ' and estimated rejection probabilities at each level of care for uclh , barnet , whittington , royal free and chase farm neonatal units for various combinations of inter - arrival time and length of stay distributions .namely , exponential ( m ) , two - phase hyper - exponential ( h ) and two - phase erlang ( e ) distributions are considered . to compare ` observed ' rejection probabilities with estimated rejection probabilities when one of these probabilities are or more , we define ` absolute percentage error ' ( ape ) as the absolute deviation between ` observed ' and estimated rejection probability divided by ` observed ' rejection probability and then multiplied by 100 .rejection probabilities below are normally considered statisfactor . for this reasonwe have not reported the ape when both ` observed ' and estimated rejection probabilities are less than . the ` observed ' and estimated rejection probabilities are close for the uclh unit . at nicu - hdu ,the highest ` observed ' rejection probability is occurred for e/e/c/0 , and the estimated rejected probability is also highest for the same arrival and discharge patterns with an absolute percentage error ( ape ) . the lowest ` observed ' rejection probability is for the h/e/c/0 while the estimated rejection probability is with an ape . at scbu for e/m / c/0 ,the ` observed ' and estimated rejection probabilities are and , respectively , with an ape . at barnet nicu - hdu , the ` observed ' and estimated rejection probabilities are close with a varying apes from .for barnet scbu the ` observed ' and estimated rejection probabilities are all less than and relatively close to each other . both the uclh nicu - hdu and scbu and barnet nicu - hdu would require additional cots to keep the rejection level low and achieve a target .rejection probabilities from both nicu - hdu and scbu at the whittington neonatal unit are below regardless of the combination of inter - arrival time and length of stay distributions , which indicates that the neonatal unit is performaing well with 12 nicu , 16 scbu and 5 tc cots .the ` observed ' and estimated rejection probabilities at royal free itu and scbu and chase farm scbu are close to each other .the results in table [ tab2 ] suggest that royal free itu and scbu and chase farm scbu require extra cots to decrease the rejection level . 
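The experiments above combine exponential (M), two-phase hyperexponential (H) and two-phase Erlang (E) inter-arrival and length-of-stay distributions. A standard way to drive such experiments is to fit these distributions to a given mean and squared coefficient of variation (SCV); the sketch below uses the usual Erlang-2 fit (SCV = 1/2) and a balanced-means hyperexponential fit (SCV > 1), which may differ from the exact parameterisation used in the paper.

```python
# Two-moment fits for the M/H/E experiments: samplers matched to a given mean
# and SCV. The balanced-means H2 fit is a common textbook choice, assumed here.
import numpy as np

rng = np.random.default_rng(2)

def sample_e2(mean, size):
    # two-phase Erlang: sum of two exponential phases, SCV = 1/2
    return rng.gamma(shape=2, scale=mean / 2, size=size)

def sample_h2_balanced(mean, scv, size):
    # balanced-means two-phase hyperexponential, valid for scv >= 1
    p = 0.5 * (1 + np.sqrt((scv - 1) / (scv + 1)))
    rates = np.array([2 * p / mean, 2 * (1 - p) / mean])
    branch = rng.random(size) < p
    return np.where(branch,
                    rng.exponential(1 / rates[0], size),
                    rng.exponential(1 / rates[1], size))

x = sample_h2_balanced(mean=14.0, scv=2.0, size=200_000)
print("mean ~", x.mean(), " SCV ~", x.var() / x.mean() ** 2)
```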
through our extensive numerical investigationswe observe that the rejection probability often varies greatly according to arrival and discharge patterns .the number of cots required will also vary depending upon arrival and discharge patterns .therefore , one should take into account the actual arrival and discharge patterns for accurate capacity planning of neonatal units rather than approximating by markovian arrival and discharge patterns . to achieve a ` 95% ' admission acceptance target uclh nicu - hdu and scbu, barnet nicu - hdu , royal free itu and scbu , and chase farm scbu need to increase their number of cots .we have also observed that performance of the proposed generalized capacity planning model improves as the squared coefficient of variation values of inter - arrival and length of stay get closer to ( recall that our approximation is exact for the markovian inter - arrival and length of stay case in which squared coefficient of variation values of inter - arrival and length of stay are both ) and as gets larger ( i.e. , under heavy traffic ) .a possible explanation is that as gets larger , the period during which all the cots are busy tends to get longer .as such a busy period gets longer , arrival and departure points of arrivals tend to become more and more like arbitrary points in time . as such, the approximation is likely to get more accurate .planning capacity accurately has been an important issue in the neonatal sector because of the high cost of care , in particular .markovian arrival and length of stay can provide only approximate estimates which may often underestimate or overestimate the required capacity .the underestimation of cots may increase the rejection level , which in turn may be life - threatening or cause expensive transfers for high risk neonates , hence increase risk for vulnerable babies . on the other hand , overestimation may cause under - utilization of cots , and potential waste of resources . in this paper a generalized framework for determining cot capacity of a perinatal network was derived .after decomposing the whole network into neonatal units , each unit was analyzed separately .expressions for the stationary distribution and for rejection probabilities were derived for each neonatal unit .an approximation method was suggested to obtain the steady state rejection probabilities .the model formulation was then applied to the neonatal units in the nclpn .a variety of inter - arrival and length of stay distributions in the neonatal units has been considered for numerical experimentation .the ` observed ' and estimated rejection probabilities were close ( ape typically less than 20% ) for all hospital units when rejection probabilities were or more . when ` observed ' rejection probabilities were less than , as for the barnet scbu and both the whittington nicu - hdu and scbu , the ape increased rapidly to beyond 50%. however , since these values are less than or close to 0.05 , they do not have an impact on management decisions regarding the number of cots .in contrast , when ` observed ' rejection probabilities are high , then the estimated values become close to each other .the ` observed ' and estimated rejection probabilities were , in general , close for high traffic intensities .as traffic intensity drops the absolute percent error increases quickly . 
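The '95% acceptance' target translates into a simple sizing rule: find the smallest number of cots whose rejection probability is below 5%. The sketch below does this with the Erlang-B (M/M/c/0) formula, i.e. the Markovian special case in which the two-moment approximation is exact; for general arrival and discharge patterns the approximation formulae, or a simulation as above, would replace erlang_b. The arrival rate and mean stay shown are illustrative, not NCLPN figures.

```python
# Cot sizing sketch for a 5% rejection target, using Erlang B for the M/M/c/0 case.
def erlang_b(c, offered_load):
    # offered_load = arrival rate * mean length of stay; stable recursion over c
    b = 1.0
    for k in range(1, c + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

def cots_needed(arrival_rate, mean_los, target=0.05):
    a = arrival_rate * mean_los
    c = 1
    while erlang_b(c, a) > target:
        c += 1
    return c

# illustrative figures only: 0.5 admissions per day, 14-day mean stay
print("cots needed:", cots_needed(arrival_rate=0.5, mean_los=14.0))
```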
in most cases, the absolute percent error becomes small for markovian arrival and length of stay patterns. we know that the service time distribution is insensitive for delay systems if the arrival process is poisson; however, the property is no longer valid for loss systems. the model results in table [tab2] also confirm this sensitivity property. the main advantage of the model framework is that the arrival and discharge patterns do not need to hold the markov property. the model is based on the first two moments and requires no distributional assumption. this two-moment approximation technique performs reasonably well in terms of accuracy (ape) and is fast. the method is exactly markovian for equal mean and variance. the numerical results show that the model can be used as a capacity planning tool for perinatal networks for non-markovian as well as markovian arrival and discharge patterns. if good estimates of the first two moments are available, then the generalized model can be used to determine the required cot capacity in a perinatal network for a given level of rejection probability. although we applied the model framework to the hospital case, the model formulation can also be applied to plan capacity in other areas such as computer, teletraffic and other communication networks.

asaduzzaman, m., t. j. chaussalet, s. adeyemi, s. chahed, j. hawdon, d. wood, n. j. robertson. 2011. towards effective capacity planning in a perinatal network centre. _archives of disease in childhood_ 95, f283-f287.

national audit office. 2007. caring for vulnerable babies: the reorganisation of neonatal services in england. retrieved april 20, 2011, http://www.nao.org.uk/publications/0708/caring_for_vulnerable_babies.aspx.
we develop a generalized loss network framework for capacity planning of a perinatal network in the uk . decomposing the network by hospitals , each unit is analyzed with a gi / g / c/0 overflow loss network model . a two - moment approximation is performed to obtain the steady state solution of the gi / g / c/0 loss systems , and expressions for rejection probability and overflow probability have been derived . using the model framework , the number of required cots can be estimated based on the rejection probability at each level of care of the neonatal units in a network . the generalization ensures that the model can be applied to any perinatal network for renewal arrival and discharge processes . * a generalized loss network model with overflow for capacity planning of a perinatal network * md asaduzzaman + institute of statistical research and training ( isrt ) , university of dhaka + dhaka 1000 , bangladesh , e - mail : asad.ac.bd thierry j chaussalet + department of business information systems , school of electronics and computer science + university of westminster , 115 new cavendish street , london w1w 6uw , uk + e - mail : chausst.ac.uk
prepositional phrase (pp) attachment disambiguation is an important problem in nlp, for it often gives rise to incorrect parse trees. statistical parsers often predict incorrect attachments for prepositional phrases, and for applications like machine translation, incorrect pp-attachment leads to serious errors in translation. several approaches have been proposed to solve this problem. we attempt to tackle this problem for english, which is a syntactically ambiguous language with respect to pp attachments. for example, consider the following sentence, where the prepositional phrase _with pockets_ may attach either to the verb _washed_ or to the noun _jeans_. *sentence 1:* i washed the jeans with pockets. the correct dependency parse tree for this sentence attaches the prepositional phrase _with pockets_ to the noun _jeans_ (figure [dep-parse1]). another possible parse tree for the same sentence attaches _with pockets_ to the verb _washed_ instead (figure [fig:m2]). a statistical parser often predicts the pp-attachment incorrectly, and may lead to incorrect parse trees. let us now look at another sentence. *sentence 2:* i washed the jeans with soap. here, the correct dependency tree attaches _with soap_ to the verb _washed_. (algorithm 1: the dual decomposition algorithm for enforcing cross-lingual agreement.) the lagrangian multipliers are initialized to zero. the best tree in the target language is predicted by the argmax computation in step 4. this maximization involves the parser model parameters and the score of the best projected path in the source tree for all edges; the latter denotes the score of the projected path of the edge on the source tree t. in steps 6 and 7, the best projected path for every edge of the source tree is predicted on the target tree using the classifiers described in section [projected path prediction]. the constraints here are that the edges in the projected paths from the classifiers and in the predicted trees are in agreement. in order to predict the projected path in one language for an edge in the other language, we use a set of two classifiers in a pipeline. let us recall that we have two nodes in one language with an edge between them, and we are trying to predict the path of the corresponding aligned nodes in the other language.
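before turning to the two classifiers, the dual-decomposition loop just described can be sketched as follows. the two callables are hypothetical stand-ins for the bias-adjusted parser and the bias-adjusted projected-path predictor; the sketch is schematic and not the exact algorithm listing.

```python
def dual_decomposition(parse_with_bias, paths_with_bias, edges, iters=30, eta=1.0):
    """schematic dual-decomposition loop (hypothetical interface):
       parse_with_bias(u) -> set of target-language edges maximising the parser
                             score plus the multiplier bias u
       paths_with_bias(u) -> set of target edges implied by the best projected
                             paths of the source edges, minus the bias u"""
    u = {e: 0.0 for e in edges}                 # multipliers start at zero
    tree = None
    for _ in range(iters):
        tree = parse_with_bias(u)               # step 4: argmax over the parser model
        paths = paths_with_bias(u)              # steps 6-7: classifier-predicted paths
        if tree == paths:                       # agreement constraint satisfied
            return tree
        for e in edges:                         # subgradient update on disagreements
            u[e] -= eta * ((e in tree) - (e in paths))
    return tree                                 # no certificate: keep the parser output
```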
the first classifier predicts the length of the projected path, and the second predicts the path itself, given the path length from the first classifier. let us look at these classifiers separately. the classifier for path length prediction is a set of five binary classifiers, which predict the path length to be 1, 2, 3, 4 or 5. we assume projected path lengths to be no greater than 5. these classifiers are _perceptrons_ trained on separate annotated data. the features used were the words and pos tags of the four nodes in the pair of alignments under consideration. the classifier for path prediction is a set of four _structured perceptron_ classifiers. we train four classifiers to predict the paths of length 2, 3, 4 and 5. this set of classifiers was trained on separate annotated data, and the features used were the same as in the set of classifiers for path length prediction. a parser model was trained for hindi using the mstparser on a part of the hindi dependency treebank data (18708 sentences) from iiit-hyderabad. a part of the penn treebank (28188 sentences) was used for training an english parser. the treebanks were converted from conll format to mstparser format for training. a part of the ilci english-hindi tourism parallel corpus (1500 sentences) was used for training the classifiers. this corpus was pos-tagged using the stanford pos tagger for english and the hindi pos tagger from iiit-hyderabad for hindi. it was then automatically annotated with dependency parse trees by the parsers we had trained earlier for english and hindi. for testing, we created a corpus of 100 parallel sentences and their word alignments from the hindi-english tourism parallel corpus. we manually annotated the instances of pp-attachment ambiguity, and we examine the predicted attachment for only these cases. the baseline system is the attachment predicted by the parser models trained using the mstparser. we ran experiments on the test set for 10 to 60 iterations, in steps of 10. the outputs from the mstparser-trained model and the dd algorithm were compared against the gold data for english. our observations are tabulated in table [results]. the mstparser model was able to correctly disambiguate 54 pp-attachments; our algorithm performed better and marked 64 attachments correctly in the best case. the baseline accuracy for pp-attachment was 54%, so with our approach we were able to achieve an improvement of 10% over the baseline.

table [results]. test results for english-hindi:

                          mstparser baseline    dd algorithm
  correct attachments             54                  64
  accuracy (%)                    54                  64

we also experimented with the number of iterations to see if the attachment predictions got any better. the observations are plotted in figure [itr]. our algorithm performed best at 30 iterations. in the absence of gold-standard data for our experiments, we used statistical pos taggers for pos tagging the data. also, for obtaining word alignments, we used giza++, which again has scope for errors. these kinds of errors may cascade and cause our system to underperform. we were able to achieve an improvement of 10% over the baseline using our approach. however, in terms of overall dependency parsing, and not just with respect to pp-attachment, our system is unable to beat the mstparser model. we also need to test our approach on a larger dataset, and across other domains besides tourism.
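for completeness, the evaluation metric reduces to a simple count over the manually marked ambiguous prepositions; the dictionary-based interface below is an assumption made for illustration, not the actual evaluation code.

```python
def pp_attachment_accuracy(gold_head, predicted_head, ambiguous_pps):
    """fraction of manually marked ambiguous prepositions whose predicted head
    matches the gold head; gold_head and predicted_head are hypothetical dicts
    keyed by (sentence_id, token_id)."""
    correct = sum(1 for pp in ambiguous_pps if predicted_head[pp] == gold_head[pp])
    return 100.0 * correct / len(ambiguous_pps)
```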
besides hindi, there is also scope for exploring other languages as an aid for pp-attachment disambiguation in english. our approach could also be used for wh-clause attachment. since incorrect pp-attachment has a direct consequence on machine translation, one interesting analysis would be to use the pp-attachments from our system and check for improvement in translation quality.
in this paper, we attempt to solve the problem of prepositional phrase (pp) attachment disambiguation in english. the motivation for the work comes from nlp applications like machine translation, for which getting the correct attachment of prepositions is crucial. the idea is to correct the pp-attachments for a sentence with the help of alignments from parallel data in another language. the novelty of our work lies in the formulation of the problem as a dual decomposition based algorithm that enforces agreement between the parse trees from the two languages as a constraint. experiments were performed on the english-hindi language pair, and the performance improved by 10% over the baseline, where the baseline is the attachment predicted by the mstparser model trained for english.
subscriber trajectory datasets collected by network operators are logs of timestamped , georeferenced events associated to the communication activities of individuals .the analysis of these datasets allows inferring _ fine - grained _ information about the movements , habits and undertakings of vast user populations .this has many different applications , encompassing both business and research .for instance , trajectory data can be used to devise novel data - driven network optimization techniques or support content delivery operations at the network edge .they can also be monetized via added - value services such as transport analytics or location - based marketing .additionally , the relevance of massive movement data from mobile subscribers is critical in research disciplines such as physics , sociology or epidemiology .the importance of trajectory data has also been recognized in the design of future 5 g networks , with a thrust towards the introduction of data interfaces among network operators and over - the - top ( ott ) providers to give them online access to this ( and other ) data .otts can leverage such interfaces to automatically retrieve the data and process them on the fly , thus enabling new applications such as intelligent transportation or assisted - life services .all these use cases stem from the disclosure of trajectory datasets to third parties . however , the open release of such data is still largely withhold , which hinders potential usages and applications .a major barrier in this sense are privacy concerns : data circulation exposes it to re - identification attacks , and cognition of the movement patterns of de - anonymized individuals may reveal sensitive information about them .this calls for anonymization techniques .the common practice operators adhere to is replacing personal identifiers ( e.g. , name , phone number , imsi ) with pseudo - identifiers ( i.e. , random or non - reversible hash values ) . whether this is a sufficient measureis often called into question , especially in relation to the possibility of tracking user movements .what is sure is that pseudo - identifiers have been repeatedly proven not to protect against user trajectory uniqueness , i.e. , the fact that mobile subscribers have distinctive travel patterns that make them univocally recognizable even in very large populations .uniqueness is not a privacy threat per - se , but it is a vulnerability that can lead to re - identification .examples are brought forth by recent attempts at cross - correlating mobile operator - collected trajectories with georeferenced check - ins of flickr and twitter users , with credit card records or with yelp , google places and facebook metadata .more dependable anonymization solutions are needed .however , the strategies devised to date for relational databases , location - based services , or regularly sampled ( e.g. , gps ) mobility do not suit the irregular sampling , time sparsity , and long duration of trajectories collected by mobile operators .moreover , current privacy criteria , including -anonymity and differential privacy , do not provide sufficient protection or are impractical in this context .see sec.[sec : related ] for a detailed discussion . 
in this paper , we put forward several contributions towards _ privacy - preserving data publishing ( ppdp ) _ of mobile subscriber trajectories .our contributions are as follows : _we outline attacks that are especially relevant to datasets of spatiotemporal trajectories ; _ ( ii ) _we introduce -anonymity , a novel privacy criterion that effectively copes with the most threatening attacks above ; _ ( iii ) _ we develop ` k - merge ` , an algorithm that solves a fundamental problem in the anonymization of spatiotemporal trajectories , i.e. , effective generalization ; _ ( iv ) _we implement ` kte - hide ` , a practical solution based on ` k - merge`that attains -anonymityin spatiotemporal trajectory data ; _ ( v ) _we evaluate our approach on real - world datasets , showing that it achieves its objectives while retaining a substantial level of accuracy in the anonymized data .we first present the requirements of ppdp , in sec.[sub : ppdp ] , and formalize the specific attacker model we consider , in sec.[sub : att ] .we then propose a consistent privacy model , in sec.[sub : priv ] .ppdp is defined as the development of methods for the publication of information that allows meaningful knowledge discovery , and yet preserves the privacy of monitored subjects .the requisites of ppdp are similar for all types of databases , including our specific case , i.e. , datasets of spatiotemporal trajectories .they are as follows . * _ the non - expert data publisher ._ mining of the data is performed by the data recipient , and not by the data publisher .the only task of the data publisher is to anonymize the data for publication . * _ publication of data , and not of data mining results_. the aim of ppdp is producing privacy - preserving datasets , and not anonymized datasets of classifiers , association rules , or aggregate statistics .this sets ppdp apart from privacy - preserving data mining ( ppdm ) , where the final usage of the data is known at dataset compilation time .* _ truthfulness at the record level_. each record of the published database must correspond to a real - world subject .moreover , all information on a subject must map to actual activities or features of the subject .this avoids that fictitious data introduces unpredictable biases in the anonymized datasets .our privacy model will obey the principles above .we stress that they impose that the privacy model must be agnostic of data usage ( points 1 and 2 ) , and that it can not rely on randomized , perturbed , permuted and synthetic data ( point 3 ) . unlike ppdp requirements , the attacker model is necessarily specific to the type of data we consider , and it is characterized by the _ knowledge _ and _ goal _ of the adversary .the former describes the information the opponent possesses , while the latter represents his privacy - threatening objective . in trajectory datasets ,each data record is a sequence of spatiotemporal samples .we assume an attacker who can track a target subscriber continuously during any amount of time .the adversary knowledge consists then in all spatiotemporal samples in the victim s trajectory over a continuous that covers all disjoint tracking intervals . 
]time interval of duration .attacks against user privacy in published data can have different objectives , and a comprehensive classification is provided in .two classes of attacks are especially relevant in the context of mobile subscriber trajectory data .both exploit the uniqueness of movement patterns that , as mentioned in sec.[sec : intro ] , characterizes trajectory data . * _ record linkage attacks ._ these attacks aim at univocally distinguishing an individual in the database .a successful record linkage enables cross - database correlation , which may ultimately unveil the identity of the user .record linkage attacks on mobile traffic data have been repeatedly and successfully demonstrated . as mentioned in sec.[sec : intro ] , they have also been used for subsequent cross - database correlations . * _ probabilistic attacks . _these attacks let an adversary with partial information about an individual enlarge his knowledge on that individual by accessing the database .they are especially relevant to spatiotemporal trajectories , as shown by seminal works that first unveiled the anonymization issues of mobile traffic datasets .let us imagine a scenario where an adversary knows a small set of spatiotemporal points in the trajectory of a subscriber ( because , e.g. , he met the target individual there ) .a successful probabilistic attack would reveal the complete movements of the subscriber to the attacker , who could then use them to infer sensitive information about the victim , such as home / work locations , daily routines , or visits to healthcare structures .our privacy model will address both classes of attacks above , led by an adversary with knowledge described in sec.[sub : knowledge ] .our privacy model is designed following the ppdp requirements and attacker model presented before .we start by considering suitable privacy criteria against record linkage and probabilistic attacks , in sec.[sub : ka ] and sec.[sub : uninf ] , respectively .we then show how the first criterion is in fact a specialization of the second , in sec.[sub : relate ] , which allows us to focus on a single unifying privacy model . finally , we present the elementary techniques that we employ to implement the target privacy criterion , in sec.[sub : tools ] .the _ -anonymity _criterion realizes the _ indistinguishability principle _ , by commending that each record in a database must be indistinguishable from at least other records in the same database . in our case , this maps to ensuring that each subscriber is hidden in a crowd of users whose trajectories can not be told apart .the popularity of -anonymity for ppdp has led to indiscriminated use beyond its scope , and subsequent controversy on the privacy guarantees it can provide .e.g. , -anonymity has been proven ineffective againt attacks aiming at attribute linkage ( including exploits of insufficient side - information diversity ) , at localizing users , or at disclosing their presence and meetings .however , -anonymity remains a legitimate criterion against record linkage attacks on any kind of database .therefore , this privacy model protects trajectory data from the first type of attack in sec.[sub : att ] , including its variations in . no privacy criterion proposedto date can safeguard spatiotemporal trajectory data from the second type of attacks in sec.[sub : att ] , i.e. 
, probabilistic attacks .this forces us to define an original criterion , as follows .the pertinent principle here is the so - called _ uninformative principle _ ,i.e. , ensuring that the difference between the knowledge of the adversary before and after accessing a database is small . in our context, this principle warrants that an attacker who knows some subset of a subscriber s movements can not extract from the dataset a substantially longer portion of that user s trajectory . -anonymityof user , with =2 . ]to attain the uninformative principle , we introduce the _ -anonymity _ privacy criterion .-anonymitycan be seen as a variation of _-anonymity _ , which establishes that each individual in a dataset must be indistinguishable from at least other users in the same dataset , when limiting the attacker knowledge to any set of attributes .-anonymity tailors -anonymity to our scenario , as follows . * as per sec.[sub : att ] , the attacker knowledge can be any continued sequence of spatiotemporal samples covering a time interval of length at most : thus , the parameter of -anonymity maps to the ( variable ) set of samples contained in any time period . during any such time period , every trajectory in the dataset must be indistinguishable from at least other trajectories .* the maximum additional knowledge that the attacker is allowed to learn is called _ leakage _ ; it consists of the spatiotemporal samples of the target user s trajectory contained in a time interval of duration at most , disjoint from the original . in order to fulfill the uninformative principle, the leakage must be small .the two requirements above imply alternating in time the trajectories that provide anonymization .an intuitive example is provided in fig.[fig : km ] . there , the trajectory of a target user is -anonymized using those of five other subscribers .the overlapping between the trajectories of , , , , and that of is partial and varied .an adversary knowing a sub - trajectory of during any time interval of duration always finds at least one other user with a movement pattern that is identical to that of during that interval , but different elsewhere . with this knowledge ,the adversary can not tell apart from the other subscriber , and thus can not attribute full trajectories to one user or the other .as this holds no matter where the knowledge interval is shifted to , the attacker can never retrieve the complete movement patterns of : this achieves the uninformative principle .still , the adversary can increase its knowledge in some cases .let us consider the interval indicated in the figure : the trajectories of , and are identical for some time after , which allows associating to the movements during : the opponent learns one additional spatiotemporal sample of .it is easy to see that -anonymity is a special case of -anonymity . as a matter of fact, the latter criterion reduces to the former when covers the whole temporal duration of the trajectory dataset .then , -anonymitycommends that each complete trajectory is indistinguishable from other trajectories , which is the definition of -anonymity .our point here is that an anonymization solution that implements -anonymitycan be straightforwardly employed to attain -anonymity as well , by properly adjusting the and parameters . in the light of these considerations ,we address the problem of achieving -anonymityin datasets of spatiotemporal trajectories of mobile subscribers . 
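as a sketch of what this criterion asks operationally, anonymized output can be checked with a sliding window. the encoding below, in which users that are merged together share the same generalized-sample objects, is only one possible representation assumed for illustration, and the check itself is a simplified sketch rather than a formal verifier.

```python
def window_view(generalized_samples, t0, tau):
    """identifiers of the generalized samples a trajectory occupies during
    [t0, t0 + tau); each generalized sample is assumed to be a shared object
    with 't_min'/'t_max' fields (an illustrative encoding)."""
    return tuple(id(g) for g in generalized_samples
                 if g["t_min"] < t0 + tau and g["t_max"] >= t0)

def anonymous_in_every_window(anonymized, user, k, tau, horizon, step=1):
    """check that, for any knowledge window of length tau, at least k - 1 other
    users share exactly the same generalized sub-trajectory as 'user'."""
    for t0 in range(0, horizon - tau + 1, step):
        target = window_view(anonymized[user], t0, tau)
        peers = sum(1 for v in anonymized
                    if v != user and window_view(anonymized[v], t0, tau) == target)
        if peers < k - 1:
            return False
    return True
```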
by doing so, we develop a complete anonymization solution that is effective against probabilistic attacks , but can also be specialized to guarantee -anonymity and counter record linkage attacks . in order to enforce -anonymityfor all users in the dataset, we need to tweak the spatiotemporal samples in the trajectories of individuals , so that the criterion in sec.[sub : uninf ] is respected for all of them . to that end , we rely on two elementary techniques , i.e. , _ spatiotemporal generalization _ and _ suppression _ of samples .spatiotemporal generalization reduces the precision of trajectory samples in space and time , so as to make the samples of two or more users indistinguishable .suppression removes from the trajectories those samples that are too hard to anonymize .both techniques are lossy , i.e. , imply some reduction of precision in the data . yet , unlike other approaches , these techniques conform to the ppdp requirement of truthfulness at the record level , see sec.[sub : ppdp ] .our goal is ensuring that an anonymized dataset of mobile subscriber trajectories respects the uninformative principle , by implementing , through generalization and suppression , the -anonymityof all subscriber trajectories in the dataset . clearly , we aim at doing so while minimizing the loss of spatiotemporal granularity in the data .we start by defining the basic operation of generalizing a set of spatiotemporal samples , and the associated cost in terms of loss of granularity , in sec.[sub : gen - samp ] .we then extend both notions to ( sub-)trajectories , in sec.[sub : gen - traj ] . building on these definitions ,we discuss in sec.[sub : kanon ] the optimal spatiotemporal generalization of ( sub-)trajectories .we implement the result into ` k - merge ` , an optimal low - complexity algorithm that generalizes ( sub-)trajectories with minimal loss of data granularity , in sec.[sub : kanon - impl ] . once able to merge ( sub-)trajectories optimally , we propose an approach to guarantee -anonymityof the trajectory of a single user , in sec.[sub : kte - one ] , and we then scale the solution to multiple users in sec.[sub : kte - mul ] . finally , we introduce ` kte - hide ` , an algorithm that ensures -anonymityin spatiotemporal trajectory datasets , in sec.[sub : algo ] . and into a generalized trajectory . for clarity, space is unidimensional . ]a ( raw ) _ sample _ of a spatiotemporal trajectory represents the position of a subscriber at a given time , and we model it with a length-3 real vector . since a dataset is characterized by a finite granularity in time and space , a sample is in fact a slot spanning some minimum temporal and spatial intervals .the vector entries above can be regarded as the origins of a normalized length-1 time interval and a normalized 1 two - dimensional area100 m area ) in space .however , our discussion is general , and holds for any precision in the data . ] .spatiotemporal generalization merges together two or more raw samples into a _ generalized sample _ ,i.e. , a slot with a larger span .mathematically , a generalized sample can be represented as the set of the merged samples .there is a cost associated with merging samples , which is related to the span of the corresponding generalized sample , i.e. 
, to the loss of granularity induced by the generalization .the cost of the operation of merging a set of samples into the generalized sample is defined as [ eq : sample_cost ] c ( g ) = c_t(g ) c_s ( g ) , where represents the cost in the time dimension , while is the cost in the space dimensions .let and be two disjoint generalized samples ( i.e. , ) .then , we make the following two assumptions on the time and space merging costs : [ eq : time_cost_prop ] c_t(g_1 g_2 ) c_t(g_1 ) + c_t ( g_2 ) [ eq : space_cost_prop ] c_s(g_1 g_2 )\{c_s(g_1 ) , c_s ( g_2 ) } .hereafter , we use the following definitions to implement the generic costs and : [ eq : time_cost_def ] c_t(g ) = t(g ) [ eq : space_cost_def ] c_s(g ) = x(g ) + y(g ) , where [ eq : sample_star ] ( g ) = _ * s*g ( * s * ) - _ * s*g ( * s * ) + 1 , with , is the span in each dimension .therefore , in our implementation , is the area of a rectangle with sides and .a graphical example is provided in fig.[fig : gen ] , where two raw samples and are merged into a generalized sample , spanning in time and in space ( portrayed as unidimensional in the figure , for the sake of readability ) .the rationale for our choice of costs is computational efficiency . also , summing the two space spans before multiplication allows balancing the time and space contributions . finally , note that with the definition in, the space merging cost assumption in is trivially true .instead , the definition in lets the time merging cost assumption in hold only if the time intervals spanned by and are non - overlapping .the time coherence property that we will introduce in sec.[sub : gen - traj ] ensures that this is the always case .a spatiotemporal ( sub-)trajectory describes the movements of a single subscriber during the dataset timespan . formally , a _ trajectory _ is an ordered vector of samples , where the ordering is induced by the time coordinate , i.e. , and only if . a _ generalized trajectory _ , obtained by merging different trajectories , is defined as an ordered vector of generalized samples . here the ordering is more subtle , and based on the fact that the time intervals spanned by the generalized samples are non - overlapping , a property that will be called _time coherence_. more precisely , if and , , are two generalized samples of , then an example of a generalized trajectory merging two trajectories and is provided in fig.[fig : gen ] . fulfils time coherence , as its generalized samples are temporally disjoint .time coherence is a defining property of generalized trajectories in ppdp . as a matter of fact ,publishing trajectory data with time - overlapping samples would generate semantic ambiguity and make analyses cumbersome .analogously to the cost of merging samples , we can define a cost of merging multiple trajectories into a generalized trajectory .we define such cost as the sum of costs of all generalized samples belonging to it . more precisely ,if , and is defined as in , then the cost of is given by : [ eq : traj_cost ] c ( * g * ) = _ i=1^z c ( g_i ) .the cost in is the overall surface covered by samples of the generalized trajectory over the spatiotemporal plane .e.g. , in fig.[fig : gen ] , the cost of is the sum of the three areas , i.e. , .it is thus proportional to the total loss of granularity induced by the generalization .we now formalize the problem of _ optimal _ generalization of spatiotemporal ( sub-)trajectories . 
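before moving on, the cost definitions above can be summarised in a short sketch; samples are taken as (t, x, y) triples on the normalised grid, and the functions mirror eqs. [eq:sample_cost], [eq:time_cost_def], [eq:space_cost_def] and [eq:traj_cost].

```python
def span(g, dim):
    """normalised span of a generalized sample g (a set of (t, x, y) samples)
    along one dimension (0 = time, 1 = x, 2 = y): max - min + 1."""
    values = [s[dim] for s in g]
    return max(values) - min(values) + 1

def sample_cost(g):
    """c(g) = c_t(g) * c_s(g), with c_t the time span and c_s the sum of the
    two spatial spans of the generalized sample."""
    return span(g, 0) * (span(g, 1) + span(g, 2))

def trajectory_cost(generalized_trajectory):
    """total loss of granularity: the sum of the generalized-sample costs."""
    return sum(sample_cost(g) for g in generalized_trajectory)

# two raw samples merged into one generalized sample, as in fig. [fig:gen]
g = [(3, 10, 0), (5, 12, 0)]          # (t, x, y) on the normalised grid
print(sample_cost(g))                 # (5-3+1) * ((12-10+1) + (0-0+1)) = 12
```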
suppose that we have trajectories , with , .the goal is a generalized trajectory from , which satisfies the following conditions . _i ) _ the union of all generalized samples of must coincide with the union of all samples of , i.e. , where .thus , is a partition of the set of all samples in the input trajectories : it does not add any alien sample or discard any input sample ._ ii ) _ each generalized sample contains at least one sample from each of the input trajectories , i.e. , this imposes that each input trajectory contributes to each generalized sample of .otherwise , the merging could associate generalized samples to users that never visited the generalized location at the generalized time , violating point 3 of the ppdp requirements in sec.[sub : ppdp ] ._ iii ) _ the cost of the merging is minimized , i.e. , [ eq : opt_kanon ] * g*^ * = _ * g*k c(*g * ) , where is the set of all partitions of satisfying time coherence as well as condition _ ii ) _ above , and is in . in fig.[fig: gen ] , the generalized trajectory fulfils all these requirements , and is thus the optimal merge of and .solving the problem above with a brute - force search is computationally prohibitive , since has a size that grows exponentially with , where denotes cardinality .however , we can characterize so that it is possible to compute it with low complexity . to that end, we name _ elementary _ a partition that can not be refined to another partition within . in other words ,none of the generalized samples of an elementary partition can be split into two generalized samples without violating conditions i ) and ii ) above , or time coherence . then, we have the following proposition . given the input trajectories , the optimal defined in is an elementary partition .* proof : * suppose is not elementary , so that it can be refined to another partition . in particular , without loss of generality , suppose that and , where [ eq : g_vs_gtilde ] g_i = \i , & i < z + _ z _ z+1 , & i = z. . from and , the difference between the costs of and is given by [ eq : diff_cost_g_gtilde ] c(*g * ) - c ( ) = c(g_z ) - c(_z ) - c(_z+1 ) . since contains the union of raw samples in and , we can apply properties and ( where holds because of time coherence ) and obtain : comparing with , we get that .thus , to search for the optimal , we can drop and keep only . if is not elementary , then we can find one of its refinements , and repeat the above steps to drop also . this way , we can drop all partitions that are not elementary and be left only with elementary partitions as candidates . and in fig.[fig : gen ] .nodes in the complete tree represent the set of valid partitions of the set of raw samples .elementary partitions are the tree leaves and constitute .the partition in fig.[fig : gen ] is the leftmost leaf in the tree . ]if we build a tree of partitions belonging to , such that the is the root and each node is a partition whose children are its refinements , the leaves are the elementary partitions , which form a subset .the above proposition states that we can limit the search of to , drastically reducing the search space of to the set of elementary partitions of .an example is provided in fig.[fig : tree ] , for the trajectories in fig.[fig : gen ] .we propose ` k - merge ` , an algorithm to efficiently search the set of raw samples , extract the subset of elementary partitions , , and identify the optimal partition . 
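a minimal sketch of the dynamic program behind `k-merge` is given below. for clarity it scans every contiguous complete segment instead of exploiting the early exit on non-elementary sets, so it runs in quadratic rather than linear time, and it assumes a feasible merge exists; it is an illustration of the idea, not the algorithm as specified in alg.[alg:algobase].

```python
def k_merge(trajectories):
    """merge k input trajectories (lists of (t, x, y) samples) into a
    time-coherent generalized trajectory of minimum total cost."""
    k = len(trajectories)
    samples = sorted((t, x, y, uid)
                     for uid, tr in enumerate(trajectories) for (t, x, y) in tr)
    n = len(samples)

    def cost(seg):
        t, x, y = ([s[d] for s in seg] for d in range(3))
        return (max(t) - min(t) + 1) * ((max(x) - min(x) + 1) + (max(y) - min(y) + 1))

    best = [float("inf")] * (n + 1)      # best[j]: minimal cost of partitioning samples[:j]
    cut = [0] * (n + 1)
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(j - 1, -1, -1):
            seg = samples[i:j]
            if len({s[3] for s in seg}) < k:   # segment must contain every input trajectory
                continue
            c = best[i] + cost(seg)
            if c < best[j]:
                best[j], cut[j] = c, i
    parts, j = [], n                     # backtrack the optimal partition
    while j > 0:
        parts.append(samples[cut[j]:j])
        j = cut[j]
    return best[n], parts[::-1]
```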
( ) [ alg1:sc_end ] , , null ( ) [ alg1:opt_start ] $ ] [ alg1:opt_end ] the algorithm , detailed in alg.[alg : algobase ] , starts by populating a set of raw samples , whose items are ordered according to their time value ( lines [ alg1:sc_start][alg1:sc_end ] ) .then , it processes all samples according to their temporal ordering ( line [ alg1:samples ] ) .specifically , the algorithm tests , for each sample in position , all sets , with , as follows .the first loop skips incomplete sets that do not contain at least one sample from each input trajectory ( line [ alg1:inc_start ] ) .the second loop runs until the first non - elementary set is encountered ( line [ alg1:elem_start ] ) .therein , the algorithm generalizes the current ( complete and elementary ) set to , and checks if reduces the total merging cost up to .if so , the cost is updated by summing to the accumulated cost up to , and the resulting ( partial ) partition of that includes is stored ( lines [ alg1:gen_start][alg1:gen_end2 ] ) .once out of the loops , the cost associated to the last sample is the optimal cost , and it is sufficient to backward navigate the partition structure to retrieve the associated ( lines [ alg1:opt_start][alg1:opt_end ] ) .note that , in order to update the cost of including the current sample ( line [ alg1:cost ] ) , the algorithm only checks previous samples in time .it thus needs that the optimal decision up to does not depend on any of the samples in the original trajectories that come later in time than .the following proposition guarantees that this is the case .let be the optimal generalized trajectory and let us make the hypothesis that and do not belong to the same generalized sample of .let and , so that and . then , can be derived independently of .* proof : * let , and be any generalized sequences containing raw samples , and , respectively . according to the cost definition , we generally have where is the concatenation of and . however , by virtue of the hypothesis and by construction , so that , to minimize we only need to minimize and independently .the above proposition guarantees that the algorithm is exploring all possibilities , and as a result , the cost returned by ` k - merge`is optimal , i.e. , it is the minimum loss of granularity necessary to merge the original trajectories .note that ` k - merge`has a very low complexity in practical cases .let be the number of sets that are both complete and elementary for a given .then , the number of computations and comparisons of sample generalization costs that are performed in ` k - merge`is , where is the average value of . if , which happens in most trajectory data where the samples of the input trajectories are intercalated in the time axis , then ` k - merge`runs in a time , i.e. , linear in the number of samples .-anonymityfor user . ]we implement -anonymityfor a generic subscriber as shown in fig.[fig : kte - one - gen ] .we discretize time into intervals of length , named _epochs_. at the beginning of the -th epoch , we select a set of users different from , named a _ hiding set _ of and denoted as .the hiding set provides -anonymity to subscriber for a subsequent time window . 
by repeating the hiding set selection for all epochs , subsequent hiding sets of user overlap at any point in time .such a structure of overlapping hiding sets assures the following .first , subscriber is -anonymized for any possible knowledge of the attacker .no matter where a time interval of length is shifted to along the time dimension , it will be always completely covered by the time window of one hiding set , i.e. , a period during which s trajectory is indistinguishable from those of other users . as an example , in fig.[fig : kte - one - gen ] , the attacker knowledge ( bottom - right of the plot ) is fully enclosed in the time window of , and his sub - trajectory is indistinguishable from those of users in .second , the additional knowledge leaked to the attacker is exactly . from the first point above, the adversary can not tell apart from the users in the hiding set whose time window covers his knowledge .however , the adversary can follow the ( generalized ) trajectories of and users in for the full time window .therefore , the adversary can infer new information about the ( generalized ) trajectory of during the time window period that exceeds his original knowledge , i.e. , .e.g. , in fig.[fig : kte - one - gen ] , the time window of spans before and after the attacker knowledge , for a total of .the two guarantees above let -anonymity , as defined in sec.[sub : uninf ] , be fulfilled for the generic user . the epoch duration maps to the knowledge leakage .the following important remarks are in order ._ 1 . hiding set selection . _the structure of overlapping hiding sets is to be implemented so that the loss of accuracy in the -anonymizedtrajectory is minimized .thus , the users in the generic hiding set shall be those who , during the time window starting at the -th epoch , have sub - trajectories with minimum ` k - merge`cost with respect to s .reuse constraint . _the uninformative principle requires alternating the trajectories used in different hiding sets , as per sec.[sub : uninf ] . a simple way to enforcethis is limiting the inclusion of any subscriber in at most one hiding set of ._ 3 . generalization set ._ as evidenced by the example in fig.[fig : kte - one - gen ] , the configuration of hiding sets changes at every epoch , and hiding sets overlap during each epoch .this means that a spatiotemporal generalization must be used to merge a set of trajectories at each epoch .epoch duration tradeoff ._ the epoch duration is a configurable system parameter , whose setting gives rise to a tradeoff between knowledge leakage and accuracy of the anonymized data .a lower reduces knowledge leakage .however , it also increases , which typically entails a more marked generalization and a higher loss of data granularity . scaling-anonymityfrom a single user to all subscribers in a dataset implies that the choice of hiding sets can not be made independently for every user .therefore , trajectory similarity and reuse constraint fulfillment are not sufficient norms anymore .in addition to the above , the selection of hiding sets needs to be concerted among all users so as to ensure that the generalized trajectories are correctly intertwined and all subscribers are -anonymized during each time window .an intuitive solution is enforcing _ full consistency _ : including a subscriber into the hiding set of user at epoch makes automatically become part of s hiding set at the same epoch .formally , , .-pick constraint , with =3 , for user during the -th hiding set selection . 
here , hence the time windows of hiding sets span two epochs . for clarity , space is unidimensional .figure best viewed in colors . ] in fact , full consistency is an unnecessarily restrictive condition .it is sufficient that hiding set concertation satisfies a _-pick constraint _ : during the -th epoch , each user in the dataset has to be picked in the hiding sets of at least other subscribers .formally , , .this provides an increased flexibility over all existing approaches which rely on fully consistent generalization strategies .the rationale behind the -pick constraint is best illustrated by means of a toy example , in fig.[fig : kte - multi ] .the figure portrays the spatiotemporal samples of users , and during epochs and .the sub - trajectory of subscriber in this time interval is , represented as black squares ; equivalently for ( orange triangles ) and ( red circles ) .samples denoted by letters belong to other users , , and , and they are instrumental to our example . let us assume that ( i.e. , hiding sets span an interval , or epochs and ) , and . at the beginning of the -th epoch , for subscriber ( resp . , and ) , one needs to select other users that constitute the hiding set ( resp . , and ) .let us consider = , = , = , which results in the generalized sub - trajectories , , in fig.[fig : kte - multi ] .the configuration satisfies the -pick constraint for subscriber , who is picked in hiding sets , i.e. , and .suppose now that the attacker knows the spatiotemporal samples of s trajectory during any time interval within the -th and -th epoch : as these samples are within , and , then is -anonymized .the key consideration is that is -anonymized at epoch by and , yet it does not contribute to the anonymization of neither nor , as .thus , it is possible to decouple the choice of hiding sets across subscribers , without jeopardizing the privacy guarantees granted by -anonymity .such a decoupling entails a dramatic increase of flexibility in the choice of hiding sets , as per the following proposition .given a dataset of trajectories and a fixed value of , the number of hiding set configurations allowed by full consistency is a fraction of that allowed by -pick that vanishes more than exponentially for .* proof : * let us consider a set of users , where is a multiple of , since otherwise full consistency can not even be enforced .let us build a matrix , in which the -th column contains , where is the hiding set for user at a given epoch .( for simplicity , in this proof , we do not take into account the reuse constraints . )the solution set under the -pick constraint coincides with the set of normalized latin rectangles _ latin rectangle _ , , is a matrix in which all entries are taken from the set , in such a way that each row and column contains each value at most once .the latin rectangle is said to be normalized if the first row is the ordered set .] of size .let be the number of normalized latin rectangles , which equals the number of possible solutions for our problem with the -pick constraint .an old result by erds and kaplansky states that , for and , k_k , u ~(u!)^k-1 ( -k(k-1)/2 ) if , instead , we enforce full consistency , then the number of solutions equals the number of different partitions of a size- set into subsets , all with size .denoting by this number , we can compute it as c_k , u = = thus , for fixed and which tends to zero more than exponentially for . 
for large datasets of hundreds of thousands trajectories, -pick enables a much richer choice of merging configurations .this reasonably unbinds better combinations of the original trajectories , and results in more accurate anonymized data .capitalizing on all previous results , we design ` kte - hide ` , an algorithm that achieves -anonymityin datasets of spatiotemporal trajectories . since even the optimal solution to the simpler -anonymity problem is known to be np - hard , we resort here to an heuristic solution .[ alg2:merge_end ] the algorithm , in alg.[alg : algokte ] , proceeds on a per - epoch basis ( line [ alg2:epoch ] ) , finding , for each epoch , a set of users ( with defined as in sec.[sub : kte - one ] ) that hide each subscriber at low merging cost .an extensive search for the set of users would have an excessive cost , where is the number of users in dataset , and .thus , we adopt a computationally efficient approach , by clustering user sub - trajectories based on their pairwise merging cost .costs are computed via ` k - merge`(lines [ alg2:epoch_start][alg2:epoch_end2 ] ) , and a standard spectral clustering algorithm groups similar trajectories into same clusters ( line [ alg2:sc ] ) .this allows operating on each cluster independently in the following .starting from epoch ( line [ alg2:epoch_min ] ) , the algorithm processes each identified cluster at epoch separately ( line [ alg2:cluster ] ) .it splits the current cluster into subsets , which contain user trajectories that share the same sequence of clusters during the last epochs ( line [ alg2:split ] ) .let be any of such subsets : is mapped to a directed graph whose nodes are the users within , and there is an edge going from user to user if can be in the hiding set of without violating the reuse constraint ( line [ alg2:graph ] ) . if a -anonymity level is required , directional cycles are then built within the graph , involving all nodes in the graph , in such a way that each node has a different parent in each cycle ( line [ alg2:greedycycle ] ) .the hiding set is then obtained as the set of user s parents in the cycles ( lines [ alg2:hv_start][alg2:hv_end2 ] ) .such a construction of hiding sets complies with the -pick constraint , since every user is in the hiding set of other users .it may however happen that no valid cycles can be created within : this means that subscribers in share a sub - trajectory that is rare in the dataset , and their number is insufficient to implement -anonymity . in this case, we apply suppression and remove all spatiotemporal samples of such users sub - trajectories ( line [ alg2:suppression ] ) .once all hiding sets are determined , the merging is performed , on each epoch and for each user , using ` k - merge`(lines [ alg2:merge_start][alg2:merge_end2 ] ) .overall , the heuristic algorithm above guarantees that overlapping hiding sets that satisfy the reuse constraint ( sec.[sub : kte - one ] ) are selected for all users .it also ensures that such a choice of hiding sets fulfils the -pick requirement ( sec.[sub : kte - mul ] ) .together , these conditions realize -anonymityof the trajectory data . the complexity of ` kte - hide`is as follows .let be the number of users , be the number of epochs and be the average number of samples per user per epoch , so that is the total number of samples in the dataset . 
then : _( i ) _ lines [ alg2:epoch_start][alg2:epoch_end2 ] perform ` k - merge`on two input trajectories times , each of them with a complexity , for a total complexity of ; _ ( ii ) _ spectral clustering ( line [ alg2:sc ] ) can be implemented with complexity using kasp ; _ ( iii ) _ the complexity of lines [ alg2:merge_start][alg2:merge_end2 ] , performing ` k - merge`on input trajectories times , is .all other subroutines of ` kte - hide`have a much smaller complexity .[ tbl : dataset_stats ] [ cols="^ , > , > , > , > , > , > , > " , ]we evaluate our anonymization solutions with five real - world datasets of mobile subscriber trajectories , introduced in sec.[sub : datasets ] . a comparative evaluation of ` k - merge`is in sec.[sub : peva - algobase ] , while the results of -anonymizationvia ` kte - hide`are presented in sec.[sub : peva - algokte ] .our datasets consist of user trajectories extracted from call detail records ( cdr ) released by orange within their d4d challenges , and by the university of minnesota .three datasets , denoted as ` abi ` , ` dak ` and ` shn ` , describe the spatiotemporal trajectories of tens of thousands mobile subscribers in urban regions , while the other two , ` civ ` and ` sen ` hereinafter , are nationwide . in all datasets , user positions map to the latitude and longitude of the current base station ( bs ) they are associated to . the main features of the datasets are listed in tab.[tbl : dataset_stats ] , revealing the heterogeneity of the scenarios . in order to ensure that all datasets yield a minimum level of detail in the trajectory of each tracked subscriber, we had to preprocess the ` abi ` and ` civ ` datasets .specifically , we only retained those users whose trajectories have at least one spatiotemporal sample on every day in a specific two - week period .no filtering was needed for the ` dak ` and ` sen ` datasets , which already contain users who are active for more than 75% of a 2-week timespan , and ` shn ` , whose users have even higher sampling rates . in all datasets , user positions map to the latitude and longitude of the current base station ( bs ) they are associated to .we discretized the resulting positions on a 100-m regular grid , which represents the finest spatial granularity we consider .samples are timestamped with an precision of one minute .this is the granularity granted in the ` abi ` and ` civ ` datasets . the ` dak ` and ` sen ` datasets feature a temporal granularity of 10 minutes : in order to have comparable datasets , we added a random uniform noise over a ten - minute timespan to each sample , so as to artificially refine the time granularity of the data to one minute as well .in the case of the ` shn ` dataset , the precision is one second , and we used a one - minute binning to uniform the data to the standard format . 
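the preprocessing just described can be sketched as follows; the record layout, the reference-latitude conversion to metres and the day-count test are assumptions made for illustration rather than the datasets' actual schema.

```python
import math
from collections import defaultdict

def preprocess(records, lat0, grid_m=100, active_days_required=14):
    """keep users with at least one sample on every day of the two-week window,
    snap positions to a 100 m grid and timestamps to one minute.
    'records' are (user, unix_ts, lat, lon) tuples; lat0 is a reference
    latitude used for a simple equirectangular metric conversion (assumption)."""
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(lat0))
    per_user = defaultdict(list)
    for user, ts, lat, lon in records:
        minute = int(ts // 60)                        # one-minute precision
        gx = int(lat * m_per_deg_lat // grid_m)       # 100 m grid cell indices
        gy = int(lon * m_per_deg_lon // grid_m)
        per_user[user].append((minute, gx, gy))
    kept = {}
    for user, samples in per_user.items():
        days = {m // (24 * 60) for m, _, _ in samples}
        if len(days) >= active_days_required:         # active on every day of the window
            kept[user] = sorted(set(samples))
    return kept
```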
since no previous solution for -anonymityexists , we are forced to compare our algorithms to previous techniques in terms of simpler -anonymity .interestingly , this allows validating our proposed approach for merging spatiotemporal trajectories via the ` k - merge`algorithm .we thus run ` k - merge`on 100 random -tuples of mobile users from the reference datasets , for different values of , and we record the spatiotemporal granularity retained by the resulting generalized trajectories .we compare our results against those obtained by the only three approaches proposed in the literature for the -anonymization of trajectories along both spatial and temporal dimensions .the first is static generalization , which consists in a homogeneous reduction of data granularity , decided arbitrarily and imposed on all user trajectories .static generalization is a trial - and - error process , and it does not guarantee -anonymity of all users .the second benchmark solution is wait for me ( w4 m ) .intended for regularly sampled ( e.g. , gps ) trajectories , w4 m performs the minimum spatiotemporal translation needed to push all the trajectories within the same cylindrical volume .it allows the creation of new synthetic samples , and it is thus not fully compliant with ppdp principles in sec.[sub : ppdp ] .the latter operation is leveraged to improve the matching among trajectories in a cluster , and assumes that mobile objects ( i.e. , subscribers in our case ) effectuate linear constant - speed movements between spatiotemporal samples .we use w4 m with linear spatiotemporal distance ( w4m - l ) , i.e. , the version intended for large databases such as those we consider , and configure it with the settings suggested in .the third approach is glove , which relies on a heuristic measure of anonymizability to assess the similarity of spatiotemporal trajectories .this measure is fed to a greedy algorithm to achieve -anonymity with limited loss of granularity and without introducing fictitious data . however , unlike ` k - merge ` , glove does not provide an optimal solution , and is computationally expensive .the results of our comparative evaluation are summarized in tab.[tab : algobase ] , for the ` abi ` and ` dak ` datasets , when varying number of trajectories merged together .similar results were obtained for the other datasets , and are omitted due to space limitations .we immediately note how static aggregation is an ineffective approach : the percentage of successfully merged -tuples is well below 100% , even when dramatically reducing the data granularity to 8 hours in time and 20 km in space .instead , ` k - merge ` , w4 m and glove can merge all of the -tuples , while retaining a good level of accuracy in the data .we can directly compare the granularity in time ( min ) and space ( km ) retained by ` k - merge ` , w4 m and glove in merging groups of trajectories : the spatiotemporal accuracy is comparable in all cases .however , it is important to note that w4 m attains this result by deleting and creating a significant amount of samples : in the end , only 40 - 70% of the original samples are maintained in the generalized data .conversely , all of the generalized samples created by ` k - merge`reflect the actual real - world data . 
also , `k - merge`obtains a level of precision that is always higher than that of glove , and scales better : indeed , the complexity of glove did not allow computing a solution when .overall , the results uphold ` k - merge`as the current state - of - the - art solution to generalize sparse spatiotemporal trajectories while obeying ppdp principles and minimizing accuracy loss .we run ` kte - hide`on our reference datasets of mobile subscriber trajectories , so that they are -anonymized . as the anonymized data are robust to probabilistic attacks by design , we focus our evaluation on the cost of the anonymization , i.e. , the loss of granularity .all results refer to the case of -anonymization , with .fig.[fig : gran_vs_t ] portrays the mean , median and first / third quartiles of the sample granularity in the -anonymizedcitywide datasets ` abi ` , ` dak ` and ` shn ` .the plots show how results vary when the adversary knowledge ranges from 10 minutes to 4 hours higher than one hour .indeed , a too close to the full dataset duration implies that the opponent has an a - priori knowledge of the victim s trajectory that is comparable to that contained in the data , making attempts at countering a probabilistic attack futile . ] .they refer to the anonymized data granularity in space is expressed as the sum of spans along the cartesian axes . for instance , 1 km maps to , e.g. , a square of side 500 m. ] , in fig.[fig : pos_gran_vs_t_abi]- and time , in fig.[fig : tim_gran_vs_t_abi]- .we remark how the -anonymizeddatasets retain significant levels of accuracy , with a median granularity in the order of 1 - 3 km in space and below 45 minutes in time .these levels of precision are largely sufficient for most analyses on mobile subscriber activities , as discussed in , e.g. , .the temporal granularity is negatively affected by an increasing adversary knowledge , which is expected .interestingly , however , the spatial granularity is only marginally impacted by : protecting the data from a more knowledgeable attacker does not have a significant cost in terms of spatial accuracy .fig.[fig : gran_vs_t_nation ] shows equivalent results for the nationwide datasets ` civ ` and ` sen ` .the evolution of temporal granularity versus , in fig.[fig : tim_gran_vs_t_civ]- is consistent with citywide scenarios .differences emerge in terms of spatial granularity : in the ` civ ` case ( fig.[fig : pos_gran_vs_t_civ ] ) a reversed trend emerges , as accuracy grows along with the attacker knowledge .this counterintuitive result is explained by the thin user presence in the ` civ ` dataset : as per tab.[tbl : dataset_stats ] , ` civ ` has a density of subscribers per km that is one or two orders of magnitude lower than those in our other reference datasets .such a geographical sparsity makes it difficult to find individuals with similar spatial trajectories : increasing has then the effect of enlarging the set of candidate trajectories for merging at each epoch , with a positive influence on the accuracy in the generalized data .these considerations are confirmed by the results with the ` sen ` dataset ( fig.[fig : pos_gran_vs_t_sen ] ) . 
as per tab.[tbl : dataset_stats ] , this dataset features a subscriber density that is about one order of magnitude higher than that of ` civ ` , but around one order of magnitude lower than those of ` abi ` , ` dak ` and ` shn ` . coherently , the spatial granularity trend falls in between those observed for such datasets , and it is not positively or negatively impacted by the attacker knowledge . more generally , the results in fig.[fig : gran_vs_t_nation ] demonstrate that ` kte - hide ` can scale to large - scale real - world datasets . the absolute performance is good , as the -anonymized data retains substantial precision : the median levels of granularity in space and time are comparable to those achieved in citywide datasets . finally , we remark that , in all cases , the amount of samples suppressed by ` kte - hide ` is in the 1%-7% range . the amount of samples suppressed by ` kte - hide ` in the -anonymization process is portrayed in fig.[fig : suppr_samples ] . we note that resorting to suppression becomes more frequent as the adversary knowledge increases . however , even when the opponent is capable of tracking a user for four continuous hours , the percentage of suppressed samples remains low , typically well below 10% . moreover , the trend in the long - timespan datasets is clearly sublinear , suggesting that suppression does not become prevalent with higher . results are fairly consistent across citywide datasets ( the behavior at = 1 hour in ` shn ` is due to the fact that the time interval is already very large , at around the same order of magnitude as the full dataset duration ) . nationwide datasets are also aligned , and yield even lower suppression rates , at around 2% . this difference is explained by the fact that a larger number of users allows for a more efficient spectral clustering in ` kte - hide ` . as an intriguing concluding remark , fig.[fig : abi_trend ] reveals a clear circadian rhythm in the granularity of the -anonymized data , as well as in the percentage of suppressed samples . the plots refer to one sample week in the ` abi ` and ` dak ` datasets , when = 30 min , but consistent results were observed in all of our reference datasets . specifically , the mean spatial granularity , in fig.[fig : pos_gran_trend_abi ] , is much finer during daytime , when subscribers are more active and the volume of trajectories is larger : here , it is easier to hide a user in the crowd . overnight displacements are instead harder to anonymize , since subscribers are limited in number and they tend to have diverse patterns . this is also corroborated by the significantly higher suppression of samples between midnight and early morning , in fig.[fig : blanked_trend_abi ] . time granularity , in fig.[fig : time_gran_trend_abi ] , is less subject to day - night oscillations : the slightly higher accuracy recorded at night is an artifact of the substantial relative suppression of samples at those times . overall , our results show that ` kte - hide ` attains -anonymity of real - world datasets of mobile traffic , while maintaining a remarkable level of accuracy in the data . interestingly , its performance is better when most needed , at daytime , when the majority of human activities take place . protection of individual mobility data has attracted significant attention in the past decade .
however , attack models and privacy criteria are very specific to the different data collection contexts . hence , solutions developed for a specific type of movement data are typically not reusable in other environments . for instance , a vast amount of work has targeted user privacy in location - based services ( lbs ) . there , the goal is ensuring that single georeferenced queries are not uniquely identifiable . this is equivalent to anonymizing each spatiotemporal sample independently , which is an altogether different problem from protecting full trajectories . even when considering sequences of queries , the lbs milieu allows pseudo - identifier replacement , and most solutions rely on this approach , see , e.g. , . if applied to spatiotemporal trajectories , these techniques would seriously and irreversibly break up trajectories in time , disrupting data utility . another popular context is that of spatial trajectories that do not have a temporal dimension . the problem of anonymizing datasets of spatial trajectories has been thoroughly explored in data mining , and many practical solutions based on generalization have been proposed , see , e.g. , . such solutions are not compatible with , or easily extended to , the more complex spatiotemporal data we consider . some works explicitly target privacy preservation of spatiotemporal trajectories . however , the precise context they refer to again makes all the difference . first , most such solutions consider scenarios where user movements are sampled at regular time intervals that are identical for all individuals , or where the number of samples per device is very small . these assumptions hold , e.g. , for gps logs or rfid records , but not for trajectories recorded by mobile operators : the latter are irregularly sampled , temporally sparse , and cover long time periods , which results in at least hundreds of samples per user . second , many of the approaches above disrupt data utility , by , e.g. , trimming trajectories , or violate the principles of ppdp , by , e.g. , perturbing or permuting the trajectories , or creating fictitious samples . third , all previous studies aim at attaining -anonymity of spatiotemporal trajectories , i.e. , they protect the data against record linkage ; this includes recent work specifically tailored to mobile subscriber trajectory datasets . as explained in sec.[sec : reqs ] , -anonymity is only a partial countermeasure to attacks on spatiotemporal trajectories . provable privacy guarantees are instead offered by _ differential privacy _ , which requires that the presence of a user 's data in the published dataset should not substantially change the output of the analysis , and thus formally bounds the privacy risk of that user . there have been attempts at using differential privacy with mobility data . specifically , it has been successfully used in the lbs context , when publishing aggregate information about the location of a large number of users , see , e.g. , . however , the requirements of these solutions already become too strong in the case of individual lbs access data . to address this problem , a variant of differential privacy , named _ geo - indistinguishability _ , has been introduced : it requires that any two locations become more indistinguishable as they are geographically closer . practical mechanisms that achieve geo - indistinguishability have been proposed , see , e.g. , .
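for concreteness , one well - known mechanism of this kind is the planar laplace mechanism from the geo - indistinguishability literature ; the sketch below is a generic illustration rather than the exact mechanism of the works cited above , and it assumes planar coordinates and a privacy parameter expressed per unit of distance .

```python
import numpy as np
from scipy.special import lambertw

def planar_laplace_noise(eps: float, rng: np.random.Generator):
    """Sample a 2-d offset with density proportional to exp(-eps * r),
    the noise distribution used by the planar-Laplace mechanism."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    p = rng.uniform(0.0, 1.0)
    # inverse CDF of the radius, expressed with the k = -1 branch of Lambert W
    r = -(1.0 / eps) * (np.real(lambertw((p - 1.0) / np.e, k=-1)) + 1.0)
    return r * np.cos(theta), r * np.sin(theta)

def obfuscate(x: float, y: float, eps: float, rng=np.random.default_rng()):
    """Report a location perturbed so that nearby locations remain hard to tell apart."""
    dx, dy = planar_laplace_noise(eps, rng)
    return x + dx, y + dy
```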
however , all of these mechanisms address the anonymization of single lbs queries : as of today , differential privacy and its derived definitions still appear impractical in the context of spatiotemporal trajectories . in this paper , we presented a first ppdp solution to probabilistic and record linkage attacks against mobile subscriber trajectory data . to that end , we introduced a novel privacy model , -anonymity , which generalizes the popular criterion of -anonymity . our proposed algorithm , ` kte - hide ` , implements -anonymity in real - world datasets , while retaining substantial spatiotemporal accuracy in the anonymized data .
m. t. asif , n. mitrovic , j. dauwels , p. jaillet , `` matrix and tensor based methods for missing data estimation in large traffic networks , '' ieee transactions on its , 17(7 ) , 2016 .
g. czibula , a. m. guran , i. g. czibula , g. s. cojocar , `` ipa - an intelligent personal assistant agent for task performance support , '' ieee iccp , 2009 .
mobile network operators can track subscribers via passive or active monitoring of device locations . the recorded trajectories offer an unprecedented outlook on the activities of large user populations , which enables developing new networking solutions and services , and scaling up studies across research disciplines . yet , the disclosure of individual trajectories raises significant privacy concerns : thus , these data are often protected by restrictive non - disclosure agreements that limit their availability and impede potential usages . in this paper , we contribute to the development of technical solutions to the problem of privacy - preserving publishing of spatiotemporal trajectories of mobile subscribers . we propose an algorithm that generalizes the data so that they satisfy -anonymity , an original privacy criterion that thwarts attacks on trajectories . evaluations with real - world datasets demonstrate that our algorithm attains its objective while retaining a substantial level of accuracy in the data . our work is a step forward in the direction of open , privacy - preserving datasets of spatiotemporal trajectories .
code - based cryptography relies crucially on the hardness of decoding generic linear codes .this problem has been studied for a long time and despite many efforts on this issue the best algorithms for solving this problem are exponential in the number of errors that have to be corrected : correcting errors in a binary linear code of length has with the aforementioned algorithms a cost of where is a constant depending of the code rate and the algorithm .all the efforts that have been spent on this problem have only managed to decrease slightly this exponent .let us emphasize that this exponent is the key for estimating the security level of any code - based cryptosystem .all the aforementioned algorithms can be viewed as a refinement of the original prange algorithm and are actually all referred to as isd algorithms .there is however an algorithm that does not rely at all on prange s idea and does not belong to the isd family : statistical decoding proposed first by al jabri in and improved a little bit by overbeck in .later on , proposed an iterative version of this algorithm .it is essentially a two - stage algorithm , the first step consisting in computing an exponentially large number of parity - check equations of the smallest possible weight , and then from these parity - check equations the error is recovered by some kind of majority voting based on these parity - check equations .however , even if the study made by r. overbeck in lead to the conclusion that this algorithm did not allow better attacks on the cryptosystems he considered , he did not propose an asymptotic formula of its complexity that would have allowed to conduct a systematic study of the performances of this algorithm . such an asymptotic formula has been proposed in through a simplified analysis of statistical decoding , but as we will see this analysis does not capture accurately the complexity of statistical decoding .moreover both papers did not assess in general the complexity of the first step of the algorithm which consists in computing a large set of parity - check equations of moderate weight .the primary purpose of this paper is to clarify this matter by giving three results .first , we give a rigorous asymptotic study of the exponent of statistical decoding by relying on asymptotic formulas for krawtchouk polynomials .the number of equations which are needed for this method turns out to be remarkably simple for a large set of parameters . 
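the binomial - coefficient estimate recalled above is easy to check numerically ; a minimal sketch with illustrative parameters :

```python
from math import comb, log2

def h(w: float) -> float:
    """Binary entropy, in bits."""
    return 0.0 if w in (0.0, 1.0) else -w * log2(w) - (1 - w) * log2(1 - w)

n, omega = 1000, 0.11
exact = log2(comb(n, round(omega * n))) / n   # (1/n) * log2 of the binomial coefficient
print(exact, h(omega))                        # the two agree up to an O(log(n)/n) term
```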
in theorem [ biassdecoding ]we prove that the number of parity check equations of weight that are needed in a code of length to decode errors is of order ( when we ignore polynomial factors ) and this as soon as .for instance , when we consider the hardest instances of the decoding problem which correspond to the case where the number of errors is equal to the gilbert - varshamov bound , then essentially our results indicate that we have to take _ all _ possible parity - checks of a given weight ( when the code is assumed to be random ) to perform statistical decoding .this asymptotic study also allows to conclude that the modeling of iterative statistical decoding made in is too optimistic .second , inspired by isd techniques , we propose a rather efficient method for computing a huge set of parity - check equations of rather low weight .finally , we give a lower bound on the complexity of this algorithm that shows that it can not improve upon prange s algorithm for the hardest instances of decoding .this lower bound follows by observing that the number of the parity - check equations of weight that are needed for the second step of the algorithm is clearly a lower - bound on the complexity of statistical decoding .what we actually prove in the last part of the paper is that irrelevant of the way we obtain these parity - check equations in the first step , the lower bound on the complexity of statistical decoding coming from the infimum of these s is always larger than the complexity of the prange algorithm for the hardest instances of decoding .as our study will be asymptotic , we neglect polynomial factors and use the following notation : let , we write iff there exists a polynomial such that . moreover, we will often use the classical result where denotes the binary entropy .we will also have to deal with complex numbers and follow the convention of the article we use here : is the imaginary unit satisfying the equation , is the real part of the complex number and we choose the branch of the complex logarithm with ,\ ] ] and .in the whole paper we consider the computational decoding problem which we define as follows : given a binary linear code of length of rate , a word at distance from the code , find a codeword such that where denotes the hamming distance .generally we will specify the code by an arbitrary generator matrix and we will denote by csd a specific instance of this problem . we will be interested as is standard in cryptography in the case where _ is supposed to be random_. the idea behind statistical decoding may be described as follows .we first compute a very large set of parity - check equations of some weight and compute all scalar products ( scalar product is modulo ) for .it turns out that if we consider only the parity - checks involving a given code position the scalar products have a probability of being equal to which depends whether there is an error in this position or not . therefore counting the number of times when allows to recover the error in this position .let us analyze now this algorithm more precisely . to make this analysis tractablewe will need to make a few simplifying assumptions .the first one we make is the same as the one made by r. 
overbeck in , namely that [ ass : one ] the distribution of the s when is drawn uniformly at random from the dual codewords of weight is approximated by the distribution of when is drawn uniformly at random among the words of weight .a much simpler model is given in and is based on modeling the distribution of the s as the distribution of where the coordinates of are i.i.d . and distributed as a bernoulli variable of parameter .this presents the advantage of making the analysis of statistical decoding much simpler and allows to analyze more refined versions of statistical decoding .however as we will show , this is an oversimplification and leads to an over - optimistic estimation of the complexity of statistical decoding .the following notation will be useful .+ denotes the set of binary of words of length of weight ; + ; + ; + ; + means that follows a bernoulli law of parameter ; + means we pick uniformly at random in .we start the analysis of statistical decoding by computing the following probabilities which approximate the true probabilities we are interested in ( which correspond to choosing uniformly at random in and not in ) under assumption [ ass : one ] these probabilities are readily seen to be equal to they are independent of the error and the position .so , in the following we will use the notation and .we will define the biases and of statistical decoding by it will turn out , and this is essential , that .we can use these biases `` as a distinguisher '' .they are at the heart of statistical decoding .statistical decoding is nothing but a statistical hypothesis testing algorithm distinguishing between two hypotheses : based on computing the random variable for uniform and independent draws of vectors in : we have according to .so the expectation of is given under by : we point out that we have regardless of the term . in order to apply the following proposition, we make the following assumption : [ ass : two ] are independent variables .[ chernoff s bound] let , i.i.d and we set .then , * consequences : * under , we have to take our decision we proceed as follows : if where we choose and if not . for the cases of interest to us ( namely and linear in ) the bias is an exponentially small function of the codelength and it is obviously enough to choose to be of order to be able to make the good decisions on all positions simultaneously ._ on the optimality of the decision ._ all the arguments used for distinguishing both hypotheses are very crude and this raises the question whether a better test exists .it turns out that in the regime of interest to us , namely and linear in , the term is of the right order . indeed our statistical test amounts actually to the neymann - pearson test ( with a threshold in this case which is not necessarily in the middle , i.e. equal to ) . in the case of interest to us , the bias between both distributions is exponentially small in and chernoff s bound captures accurately the large deviations of the random variable .now we could wonder whether using some finer knowledge about the hypotheses and could do better .for instance we know the a priori probabilities of these hypotheses since . 
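the single - position distinguisher described above can be illustrated with a small monte - carlo sketch under the constant - weight model of assumption [ ass : one ] : parity - checks are drawn uniformly among the weight - w words that involve the target position , and the frequency of < h , e > = 1 is estimated under both hypotheses . the parameters and the fixed error below are illustrative only .

```python
import random

def parity_frequency(n, w, t, e_at_target, trials, rng):
    """Empirical frequency of <h, e> = 1 when h is uniform among weight-w words
    with a 1 at position 0, and e is a fixed weight-t error whose bit at
    position 0 equals e_at_target."""
    error = set(rng.sample(range(1, n), t - e_at_target)) | ({0} if e_at_target else set())
    ones = 0
    for _ in range(trials):
        h = {0} | set(rng.sample(range(1, n), w - 1))   # weight-w check through position 0
        ones += len(h & error) % 2                       # parity of the overlap
    return ones / trials

rng = random.Random(0)
n, w, t, trials = 200, 20, 30, 20000
q0_hat = parity_frequency(n, w, t, 0, trials, rng)
q1_hat = parity_frequency(n, w, t, 1, trials, rng)
print(q0_hat, q1_hat)
# decision for an unknown position: declare an error there iff the observed frequency
# of <h, y> = 1 over the available checks is closer to q1_hat than to q0_hat
```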
it can be readily verified that using bayesian hypothesis testing based on the a priori knowledge of the a priori probabilities of both hypotheses does not allow to change the order of number of tests which is still when and are linear in .statistical decoding is a randomized algorithm which uses the previous distinguisher .as we just noted , this distinguisher needs parity - check equations of weight to work .this number obviously depends on and and we use the notation : .now we have two frameworks to present statistical decoding .we can consider the computation of parity - check equations as a pre - computation or to consider it as a part of the algorithm . to consider the case of pre - computation , simply remove line of algorithm 1 and consider the s as an additional input to the algorithm . `paritycheckcomputation` will denote an algorithm which for an input outputs vectors of . /*_error/*_auxiliary algorithm_*/ clearly statistical decoding complexity is given by * when the s are already stored and computed : ; * when the s have to be computed : where stands for the complexity of the call ` paritycheckcomputation` .as explained in introduction , our goal is to give the asymptotic complexity of statistical decoding .we introduce for this purpose the following notations : ; .the two following quantities will be the central object of our study .we define the asymptotic complexity of statistical decoding when the s are already computed by whereas the asymptotic complexity of the complete algorithm of statistical decoding ( including the computation of the parity - check equations ) is defined by one could wonder why these quantities are defined as infimum limits and not directly as limits .this is due to the fact that in certain regions of the error weight and parity - check weights the asymptotic bias may from time to time become much smaller than it typically is .this bias is indeed proportional to values taken by a krawtchouk polynomial and for certain errors weights and parity - check weights we may be close to the zero of the relevant krawtchouk polynomial ( this corresponds to the second case of theorem [ th : expansion ] ) .we are looking for explicit formulas for and .the second quantity depends on the algorithm which is used .we will come back to this issue in subsection [ fram ] . for our purposewe will use krawtchouk polynomials and asymptotic expansions for them coming from .let be a positive integer , we recall that the krawtchouk polynomial of degree and order , is defined for by : these krawtchouk polynomials are readily related to our biases .we can namely observe that to recast the following evaluation of a krawtchouk polynomial as we have a similar computation for let us recall theorem 3.1 in .[ th : expansion ] let and be three positive integers .we set and .we assume .let has two solutions and which are the two roots of the equation .let and .the two roots are equal to and is defined to be root .there are two cases to consider * in the case , is positive , is a real negative number and we can write where and .* in the case , is negative , is a complex number and we have where denotes the imaginary part of the complex number , denotes a function which is uniformly in , and .the asymptotic formulas hold uniformly on the compact subsets of the corresponding open intervals .note that strictly speaking is incorrectly stated in ( * ? ? 
?the problem is that ( 3.20 ) is incorrect in , since both and are negative and taking a square root of these expressions leads to a purely imaginary number in ( 3.20 ) .this can be easily fixed since the expression which is just above ( 3.20 ) is correct and it just remains to take the imaginary part correctly to derive .it will be helpful to use the following notation from now on . and for we define the following quantities we are now going use these asymptotic expansions to derive explicit formulas for .we start with the following lemma .[ lem : real ] with the hypothesis of proposition just above , we have from and we have by using theorem [ th : expansion ] we obtain when plugging the asymptotic expansions of the krawtchouk polynomials into we clearly have and and therefore from the particular form of we deduce that we observe now that and therefore it is insightful to express the term as the point is that and where .therefore using this in and then in implies the lemma . from this lemma we can deduce that [ lem : realcomplete ] assume and for have we have where we used in the second case corresponding to is handled by the following lemma ( note that it is precisely the `` sin '' term that appears in it that lead us to define as an infimum limit and not as a limit ) [ lem : complex ] when for we have where and .the proof of this lemma is very similar to the proof of lemma [ lem : real ] . fromand we have by plugging the asymptotic expansion of krawtchouk polynomials given in theorem [ th : expansion ] into we obtain where the s are functions which are of order uniformly in .we clearly have and and therefore from the particular form of we deduce that from this we deduce that we now observe that where follows from the observation recall that where and that the point is that and therefore using this in and then multiply by implies we can substitute for this expression in and obtain recall that by using this in we obtain from lemmas [ lem : realcomplete ] and [ lem : complex ] we deduce immediately that [ cor : biassdecoding ] we set , * if : * if : these asymptotic formulas turn out to be already accurate in the `` cryptographic range '' as it is shown in figure [ fig : numbias ] .amazingly enough these formulas can be simplified a lot in the second case of the corollary as shown by the following theorem .[ biassdecoding ] + * if : where is the smallest root of . *if : the first case is just a slight rewriting . to prove the formula corresponding to the second case let us recall that the that appears in the second case of corollary [ cor : biassdecoding ] satisfies where let let us first differentiate this expression with respect to : since with , we deduce that substituting this expression for in yields we continue the proof by differentiating now with respect to : recall that is also given by one of the two roots of ( see theorem [ th : expansion ] for the root which is actually chosen ) and therefore from this we deduce that these two results on the derivative imply that for some constant which is easily seen to be equal to by letting go to and go to in . introduced another model for the parity - check equations used in statistical decoding . 
instead of assuming that they are chosen randomly of a given weight , the authors of assume that they are random binary words of length where the entries are chosen independently of each other according to a bernoulli distribution of parameter .in other words , the expected weight is still but the weight of the parity - check equation is not fixed anymore and may vary .we will call it the _ binomial model _ of weight and length and refer to our model as the constant weight model of weight .the binomial model presents the advantage of simplifying significantly the analysis of statistical decoding .it is easy to analyze the simple statistical decoding algorithm that we consider here and to compute asymptotically the number of parity - check equations that ensure successful decoding .we will do this in what follows .but the authors of went further since they were even able to analyze asymptotically an iterative version of statistical decoding by following some of the ideas of .they showed that in the binomial model of weight and length , the number of check sums that are necessary to correct with large enough probability errors by using the iterative decoding algorithm of is well estimated by with where the constant in the `` big o '' depends on the ratio .let us first show that naive statistical decoding performs almost as well when we forget about polynomial factors .it makes sense in order to compare both models to introduce some additional notation . where is a parity - check equation chosen according to the binomial model and the probability is taken over the random choice of in this model ( and means that we take the probabilities according to the binomial model ) .these quantities do not depend on .it will also be convenient to define and as the computations of ( * ? ? ? * sec ii .b ) show that this implies that it is also convenient in order to distinguish both models to rename the quantities , , and that were introduced before by referring to them as , , and respectively .we can perform the same statistical test as before by computing from parity - check equations all involving the bit we want to decode , the quantity the expectation of this quantity is depending on the value of the bit we want to decode .we decide that the bit we want to decode is equal to if and otherwise .as before , we observe that by chernoff s bound we make a wrong decision with probability at most .this probability can be made to be of order by choosing as for a suitable constant . in this case , decoding the whole sequence succeeds with probability .in other words , naive statistical decoding succeeds for .we may observe now that this means that naive statistical decoding needs only marginally more equations in the binomial model ( namely a multiplicative factor of order ) . 
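the stripped expression referred to above is presumably the standard piling - up computation for the binomial model ; a hedged reconstruction , consistent with the surrounding discussion ( here w = ωn , t = τn , and the bias δ_bin is taken as the gap between the probabilities of even and odd scalar product ) :

```latex
\Pr\bigl[\langle \mathbf{h},\mathbf{e}\rangle = 1\bigr]
  \;=\; \frac{1-\bigl(1-\tfrac{2w}{n}\bigr)^{t}}{2},
\qquad
\delta_{\mathrm{bin}} \;=\; \Bigl(1-\frac{2w}{n}\Bigr)^{t},
\qquad
\frac{1}{\delta_{\mathrm{bin}}^{2}} \;=\; 2^{\,2\tau n\,\log_{2}\frac{1}{1-2\omega}}.
```

up to logarithmic and constant factors , of the order of 1/δ_bin² parity - checks then suffice for the naive distinguisher , as in the constant - weight case .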
to summarize the whole discussion ,the number of parity - checks needed for decoding is * with iterative statistical decoding over the binomial model * with naive statistical decoding over the binomial model * with naive statistical decoding over the constant weight model one might wonder now whether there is a difference between both models .it is very tempting to conjecture that both models are very close to each other since the expected weight of the parity - checks is in both cases .however this is not the case , we are really in a large deviation situation where the bias of some extreme weights take over the bias corresponding to the typical weight of the parity check equations .to illustrate this point , we choose the weight to be , the number of errors as for some fixed and , and then let go to infinity . the normalized exponent and we mean here the coefficient .] of the number of parity - check equations which is needed is in the binomial case , whereas is given by theorem [ biassdecoding ] in the constant weight case and both terms are indeed different in general .one case which is particularly interesting is when and are chosen as and , where is the code rate we consider .this corresponds to the hardest case of syndrome decoding and when the parity - check equations of this weight can be easily obtained as we will see in section [ sec : naive ] .the two normalized exponents are compared on figure [ fig : kf ] as a function of the rate . as we see, there is a huge difference .the problem with the model chosen in is that it is a very favorable model for statistical decoding . to the best of our knowledgethere are no efficient algorithms for producing such parity - checks when .note that even such an algorithm were to exist , selecting appropriately only one weight would not change the exponential complexity of the algorithm ( this will be proved in section [ sec : single ] ) . in other words , in order to study statistical decoding we may restrict ourselves , as we do here , to considering only one weight and not a whole range of weights .the difference between both formulas is even more apparent when considering the slopes at the origin as shown in figure [ fig : kfs ] .however both models get closer when the error weight decreases .for instance when considering a relative error , we see in figure [ fig : kfdgv2 ] that the difference between both models gets significantly smaller .actually the difference vanishes when the relative error tends to , as shown by proposition [ subcpx ] .[ asymptotic complexity of statistical decoding for a sub - linear error weight][subcpx] as decreases to , we consider for the first formula which is given in theorem [ biassdecoding ] .we have : with let us compute now taylor series expansion of when .we start with now using the fact that : we have : and we deduce that : and therefore now using the fact that : we have the asymptotic expansions with the logarithms : so we deduce that : so by plugging this expression with in we have the result .the sublinear case is also relevant to cryptography since several mceliece cryptosystems actually operate at this regime , this is true for the original mceliece system with fixed rate binary goppa codes or with the mdpc - mceliece cryptosystem . 
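the gap between the two models summarized above can also be checked directly at finite length : under the constant - weight model the probability of an odd overlap between a parity - check and the error is an exact counting expression , while under the binomial model it is the piling - up expression sketched earlier . a minimal sketch with illustrative parameters ( not those of figure [ fig : kf ] ) :

```python
from math import comb, log2
from fractions import Fraction

def bias_constant_weight(n, w, t):
    """Bias P(even overlap) - P(odd overlap) for h uniform over weight-w words
    and a fixed weight-t error, computed by exact counting."""
    even = sum(comb(t, j) * comb(n - t, w - j) for j in range(0, min(t, w) + 1, 2))
    odd = sum(comb(t, j) * comb(n - t, w - j) for j in range(1, min(t, w) + 1, 2))
    return Fraction(even - odd, comb(n, w))

def bias_binomial(n, w, t):
    """Bias under the binomial model (piling-up lemma)."""
    return (1 - Fraction(2 * w, n)) ** t

n = 240
w, t = n // 4, n // 8    # omega = 1/4, tau = 1/8, illustrative only
for name, b in (("constant weight", bias_constant_weight(n, w, t)),
                ("binomial       ", bias_binomial(n, w, t))):
    # (1/n) * log2(1 / bias^2): normalized exponent of the number of checks needed
    exponent = -2.0 * (log2(abs(b.numerator)) - log2(b.denominator)) / n
    print(name, "%.3f" % exponent)
```

for these illustrative parameters the constant - weight model requires more equations , in line with the comparison above .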
in this regime , showed that all isd algorithms have the same asymptotic complexity when the number of errors to correct is equal to and this is given by : let us compare the exponents of statistical decoding and the isd algorithms when we want to correct a sub - linear error weight .when the complexity we are after is subsexponential in the length . the only algorithm finding moderate weight parity - check equations in subexponential time we found is algorithm [ alg : gauss ] .it produces parity - check equations of weight in amortized time .so with this algorithm , the exponent of statistical decoding is given by which is twice the exponent of all the isds .we did not conclude for a relative weight as in any case , all the algorithms we found needed exponential time to output enough equations to perform statistical decoding .so unless one comes up with an algorithm that is able to produce parity - check equations of relative weight in subexponential time , statistical decoding is not better that any isds when we have to correct errors .the previous section showed that if it is much more favorable when it comes to perform statistical decoding to produce parity - check equations following the binomial model of weight rather than parity - checks of constant weight .the problem is that as far as we know , there is no efficient way of producing moderate weight parity - check equations ( let us say that we call moderate any weight ) which would follow such a model .even the `` easy case '' , where and where it is trivial to produce such equations by simply putting the parity - check matrix in systematic form and taking rows in this matrix , ] does not follow the binomial model : the standard deviation of the parity - check equation weight is easily seen to be different between what is actually produced by the algorithm and the binomial model of weight .of course , this does not mean that we should rule out the possibility that there might exist such efficient algorithms . we will however prove that under very mild conditions , that even such an algorithm were to exist then anyway it would produce by nature parity - checks of different weights and that we would have a statistical decoding algorithm of the same exponential complexity which would keep only _one very specific weight_. in other words , it is sufficient to care about the single weight case as we do here when we study just the exponential complexity of statistical decoding . to verify this , we fix an arbitrary position we want to decode and assume that some algorithm has produced in time , parity check equations involving this position where denotes the number of parity - check equations of weight .the equations of weight are denoted by .statistical decoding is based on simple statistics involving the values . to simplify a little bit the expressions we are going to manipulate , let us introduce similarly to assumptions [ ass : one ] and [ ass : two ], we assume that the distribution of is approximated by the distribution of when is drawn uniformly at random among the words of weight and the s are independent .so we have under the hypothesis and is the bias defined in subsection [ bias ] for a weight .our aim now is to find a test distinguishing both hypotheses and . 
as in subsection [ bias ] it will be the neyman - pearson test . we define the following quantity where denotes the probability under the hypothesis : the neyman - pearson lemma tells us to proceed as follows : if , where is some threshold , choose and otherwise . in this case , no other statistical test will lead to lower false detection probabilities at the same time . in our case , it is enough to set the threshold to since it can be easily verified that no other choice would change the exponent of the number of samples we need for having vanishing false detection probabilities . we set , and , we have : therefore , by taking the natural logarithm of this expression and using and , we have :

$$
\begin{aligned}
\ln \Lambda & = \sum_{j=1}^{n} \left( m_{j} - i_{1}(j) \right) \left[ \ln(1-p_{0}(j)) - \ln(1-p_{1}(j)) \right] + i_{1}(j) \left[ \ln(p_{0}(j)) - \ln(p_{1}(j)) \right] \\
& = \sum_{j=1}^{n} \left( m_{j} - i_{1}(j) \right) \left[ \ln(1-p_{0}(j)) - \ln(1-p_{1}(j)) \right] + \sum_{s=1}^{m_{j}} x_{s}^{j} \left[ \ln(p_{0}(j)) - \ln(p_{1}(j)) \right] \\
& = \sum_{j=1}^{n} \sum_{s=1}^{m_{j}} x_{s}^{j} \left[ \ln(p_{0}(j)) - \ln(1-p_{0}(j)) + \ln(1-p_{1}(j)) - \ln(p_{1}(j)) \right] + m_{j} \ln \frac{1-p_{0}(j)}{1-p_{1}(j)}
\end{aligned}
$$

we now use the taylor series expansion around : and we deduce for in : we have , where and is the constant defined by : this computation suggests to use the random variables to build our distinguisher with the neyman - pearson likelihood test . by the assumptions on the s , the s are independent and we have under : the expectation of under is given by : as for our previous distinguisher we define the random variable for uniform and independent draws of vectors in : the expectation of depends on which hypothesis holds . when hypothesis holds , we denote the expectation of by . the difference is given by : the deviations of around its expectation will be quantified through hoeffding 's bound , which gives in this case , up to constant factors in the exponent , the right behavior of the probability that deviates from its expectation . let be independent random variables , and with such that : we set , then : in order to distinguish both hypotheses , we set . so under , we have we decide that hypothesis holds if and that holds otherwise . it is clear that the probability of making a wrong decision with this distinguisher is smaller than . if we want for any fixed , have to be such that : note that this is really the right order ( up to some constant factor ) for the amount of equations which is needed ( the hoeffding bound captures well , up to constant factors , the probability of error of the distinguisher in this case ) , and using an optimal bayesian decision does not allow to change , up to multiplicative factors , the number of equations that are needed for a fixed relative error weight . now assume that [ ass : polyeqpar ] if we can compute parity - check equations of weight in time , we are able to compute parity - check equations of this weight in time . this assumption holds for all `` reasonable '' randomized algorithms producing random parity - checks with uniform / quasi - uniform probability as long as is at most some constant fraction ( with a constant ) of the total number of parity - check equations . now we set such that : clearly , if we take now , instead of the original parity - check equations , just the parity - check equations of weight , the probability of error does not get smaller than the bound that we had before since . so , under assumption [ ass : polyeqpar ] , if our distinguisher with several weights has enough parity - check equations available , we
are able in polynomial time to compute parity - check equations of weight where is chosen such that ( [ eq : leadingweight ] ) holds and with these parity - check equations the distinguisher of subsection [ bias ] can work too .the complexity of statistical decoding without the phase of computation of the parity - check equations is the number of parity - check equations that it is needed .so , under assumption [ ass : polyeqpar ] , its complexity with our first distinguisher will be for each codelength the same up to a polynomial mutiplicative factor as the complexity with the second distinguisher .moreover , under assumption [ ass : polyeqpar ] the complexity of the computation of the parity - check equations that is needed for both distinguishers is the same up to a polynomial factor .as the are exponentially small in , in order to have a probability of success which tends to , the s of both distinguisher have to be of order .it leads to the conclusion that the asymptotic exponent of the statistical decoding is the same with considering some well chosen weight or several weights .we stress that this conclusion is about an asymptotic study of the complexity of statistical decoding . indeed , in practice algorithms [ alg : gauss ] and[ alg : fusion ] can output many parity - check equations of weight close to and . it will be counter - productive not to keep them and use them with the distinguisher we just described .as we are now able to give a formula for we come back to the algorithm + ` paritycheckcomputation` in order to estimate .there is an easy way of producing parity - check equations of moderate weight by gaussian elimination .this is given in algorithm [ alg : gauss ] that provides a method for finding parity - check equations of weight of an ] random binary linear code is .obviously if is too small there are not enough equations for statistical decoding to work , we namely need that the minimum such that this holds is clearly given by the minimal such that the following expression holds so gives the minimal relative weight such that asymptotically the number of parity - check equations needed for decoding is exactly the number of parity - check equations of weight in the code , where . 
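the row - reduction idea behind algorithm [ alg : gauss ] can be sketched as follows . this is a generic illustration ( dense gf(2 ) arithmetic , with column permutations replaced by a simple failure check ) rather than the algorithm of the paper : bringing a parity - check matrix into systematic form yields rows that are themselves parity - check equations of weight about 1 + k/2 for a random code .

```python
import numpy as np

def systematic_parity_checks(H: np.ndarray) -> np.ndarray:
    """Row-reduce an (n-k) x n binary parity-check matrix over GF(2) into
    systematic form [I | A]; each row is then a parity-check equation of
    expected weight about 1 + k/2 when the code is random."""
    H = H.copy() % 2
    r, n = H.shape
    for col in range(r):
        # pick the first row at or below `col` with a 1 in this column
        pivot = col + int(np.argmax(H[col:, col] == 1))
        if H[pivot, col] == 0:
            # happens with constant probability for a random matrix; a real
            # implementation would permute columns and retry
            raise ValueError("leading block is singular; re-draw the permutation")
        H[[col, pivot]] = H[[pivot, col]]
        for row in range(r):
            if row != col and H[row, col] == 1:
                H[row] ^= H[col]
    return H

rng = np.random.default_rng(1)
n, k = 64, 32
H = rng.integers(0, 2, size=(n - k, n), dtype=np.uint8)
checks = systematic_parity_checks(H)
print(checks.sum(axis=1))   # row weights, concentrated around 1 + k/2
```

the rows obtained this way have relative weight close to r/2 , consistent with the easy case mentioned earlier .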
below this weight, statistical decoding can not work ( at least not for random linear codes ) .in other words the asymptotic exponent of statistical decoding is always lower - bounded by .in the case of a relative error weight given by the gilbert - varshamov bound , theorem [ theobias ] leads to the conclusion that moreover for all relative weights greater than the number of parity - check equations that are needed is exactly the number of parity - check equations of this weight that exist in a random code .this result is rather intriguing and does not seem to have a simple interpretation .the relative minimal weight is in relationship with the first linear programming bound of mceliece - rodemich - rumsey - welch and can be interpreted through its relationship with the zeros of krawtchouk polynomials .this bound arises from the fact that from theorem [ theobias ] , we know that corresponds to the relative weight where we switch from the complex case to the real case , and this happens precisely when we leave the region of zeros of the krawtchouk polynomials .thanks to figure [ fig : limita ] which compares prange s isd , statistical decoding with parity - check equations of relative weight and with , we clearly see on the one hand that there is some room of improving upon naive statistical decoding based on parity - check equations of weight , but on the other hand that even with the best improvement upon statistical decoding we might hope for , we will still be above the most naive information set decoding algorithm , namely prange s algorithm . the goal of this subsection is to present an improvement to the computation of parity - check equations and to give its asymptotic complexity .r. overbeck in ( * ? ? ?4 ) showed how to compute parity - check equations thanks to stern s algorithm .we are going to use this algorithm too .however , whereas overbeck used many iterations of this algorithm to produce a few parity - check equations of small weight , we observe that this algorithm produces in a natural way during its execution a large number of parity - check equations of relative weight smaller than . we will analyze this process here and show that it yields an algorithm that gives equations in amortized time . to find parity - check equations , we described an algorithm which just performs gaussian elimination and selection of sufficiently sparse rows .in fact , it is the main idea of prange s algorithm .as we stressed in introduction , this algorithm has been improved rather significantly over the years ( isd family ) .our idea to improve the search for parity - check equations is to use precisely these improvements .the first significant improvement is due to stern and dumer .the main idea is to solve a sub - problem with the birthday paradox .we are going to describe this process and show how it allows to improve upon naive statistical decoding .we begin by choosing a random permutation matrix and putting the matrix into the systematic form : \1 .we solve csd( ) .\2 . for each solution , we output .we recall that solving csd( ) means to find columns of which yield . * soundness : * we have and therefore is a parity - check equation of . * number of solutions : * the number of solutions is given by the number of solutions of 1 .furthermore , the complexity of this algorithm is up to a polynomial factor given by the complexity of 1 .this algorithm may not provide in one step enough solutions . in this case, we have to put in another systematic form ( _ i.e. 
_ choose another permutation ) .the randomness of our algorithm will come from this choice of permutation matrix . * solutions weight : * in our model is supposed to be random .so we can assume the same hypothesis for .as the length of its rows is , we get asymptotically parity - check equations of weight : the first part of this algorithm can be viewed as the first part of isd algorithms .there is a general presentation of these algorithms in in section 3 .all the efforts that have been spent to improve prange s isd can be applied to solve the first point of our algorithm . to solve this point ,dumer suggested to put in the following form : and to build the lists : then we intersect these two lists with respect to the second coordinate and we keep the associated first coordinate . in other words , we get : this process is called a fusion . algorithm [ alg : fusion ] summarizes this formally .input : .output : /*_subset of */ /*_empty list_*/ / * _ hash table_*/ random permutation matrix we find non - singular such that we partition as where as we neglect polynomial factors , the complexity of algorithm 3 . is given by : indeed , we only have to enumerate the hash table construction ( first factor ) and the construction of . in order to estimate we use the following classical proposition :let be two lists where inputs are supposed to be random and distributed uniformly .then , the expectation of the cardinality of their intersection is given by : as we supposed random , we can apply this proposition to ` dumerfusion ` .therefore , _ ` dumerfusion ` _ s complexity is given by : and it provides on average solutions in order to study this algorithm asymptotically , we introduce the following notations and relative parameters : ; ; ; .we may observe that gives the number of parity - check equations that ` dumerfusion ` outputs in one iteration and is the running time of one iteration .there are many ways of choosing and .however in any case ( see subsection [ lim ] ) , as the weight of parity - check equations we get with ` dumerfusion ` is we have to choose and such that which is equivalent to the following lemma gives an asymptotic choice of and that allows to get parity - check equations in amortized time : if _ ` dumerfusion ` _ provides parity - check equations of relative weight in amortized time .moreover , with this constraint we have asymptotically : we remark that .our goal is to find such that asymptotically .the constraint follows from .we are now able to give the asymptotic complexity of statistical decoding with the use of ` dumerfusion ` strategy . 
with the constraints ( [ asym ] ) , ( [ amtime ] ) and for we have : to ( [ amtime ] ) and ( [ oneite ] ) we use subsection [ fram ] and we conclude that under theses constraints we have .we summarize the meaning of the constraints as : * with ( [ asym ] ) we are sure there exists enough parity - check equations for statistical decoding to work ; * with ( [ amtime ] ) ` dumerfusion ` gives parity - check equations in amortized time ; * with ( [ oneite ] ) ` dumerfusion ` provides always no more equations in one iteration than we need .in order to get the optimal statistical decoding complexity we minimize ( with given by theorem [ biassdecoding ] ) under constraints , and .the exponent of statistical decoding with this strategy is given in figure [ fig : limit ] .as we see , ` dumerfusion ` with our strategy allows statistical decoding to be optimal for rates close to .we can further improve ` dumerfusion ` with ideas of and , however this comes at the expense of having a much more involved analysis and would not allow to go beyond the barrier of the lower bound on the complexity of statistical decoding given in the previous subsection .nevertheless with the same strategy , these improvements lead to better rates with an optimal work of statistical decoding .in this article we have revisited statistical decoding with a rigorous study of its asymptotic complexity . we have shown that under assumption 1 and 2 this algorithm is regardless of any strategy we choose for producing the moderate weight parity - check equations needed by this algorithm always worse than prange isd for the hardest instance of decoding ( i.e. for a number of errors equal to gilbert varshamov bound ) . in this casea very intriguing phenomenon happens , we namely need for a large range of parity - check weights all the parity - check available in the code to be be able to decode with this technique .it seems very hard to come up with choices of rate , error weight and length for which statistical decoding might be able to compete with isd even if this can not be totally ruled out by the study we have made here .however there are clearly more sophisticated techniques which could be used to improve upon statistical decoding .for instance using other strategies by grouping positions together and using all parity - check equations involving bits in this group could be another possible interesting generalization of statistical decoding .anja becker , antoine joux , alexander may , and alexander meurer . decoding random binary linear codes in : how improves information set decoding . in _ advances in cryptology - eurocrypt 2012_ , lecture notes in comput .springer , 2012 .rodolfo canto - torres and nicolas sendrier .analysis of information set decoding for a sub - linear error weight . in _ post - quantum cryptography 2016_ , lecture notes in comput ., pages 144161 , fukuoka , japan , february 2016 .marc p. c. fossorier , kazukuni kobara , and hideki imai .modeling bit flipping decoding based on nonorthogonal check sums with application to iterative decoding attack of mceliece cryptosystem ., 53(1):402411 , 2007 .matthieu finiasz and nicolas sendrier .security bounds for the design of code - based cryptosystems . in m.matsui , editor , _ advances in cryptology - asiacrypt 2009 _ , volume 5912 of _ lecture notes in comput ._ , pages 88105 .springer , 2009 .abdulrahman al jabri . a statistical decoding algorithm for general linear block codes . 
in bahramhonary , editor , _ cryptography and coding .proceedings of the 8^th^ i m a international conference _ , volume 2260 of _ lecture notes in comput ._ , pages 18 , cirencester , uk , december 2001 .springer .alexander may , alexander meurer , and enrico thomae . decoding random linear codes in . in donghoon lee and xiaoyun wang , editors , _ advances in cryptology - asiacrypt 2011 _ , volume 7073 of _ lecture notes in comput ._ , pages 107124 .springer , 2011 .alexander may and ilya ozerov . on computing nearest neighbors with applications to decoding of binary linear codes .in e. oswald and m. fischlin , editors , _ advances in cryptology - eurocrypt 2015 _ , volume 9056 of _ lecture notes in comput ._ , pages 203228 .springer , 2015 .rafael misoczki , jean - pierre tillich , nicolas sendrier , and paulo s. l. m. barreto . : new mceliece variants from moderate density parity - check codes . in _ proc .symposium inf .theory - isit _ , pages 20692073 , 2013 .raphael overbeck .statistical decoding revisited . in reihanehsafavi - naini lynn batten , editor , _ information security and privacy : 11^th^ australasian conference , acisp 2006 _ , volume 4058 of _ lecture notes in comput ._ , pages 283294 .springer , 2006 .jacques stern . a method for finding codewords of small weight . in g.d. cohen and j. wolfmann , editors , _ coding theory and applications _ , volume 388 of _ lecture notes in comput ._ , pages 106113 .springer , 1988 .
the security of code - based cryptography relies primarily on the hardness of generic decoding with linear codes . the best generic decoding algorithms are all improvements of an old algorithm due to prange : they are known under the name of information set decoding techniques ( isd ) . a while ago a generic decoding algorithm which does not belong to this family was proposed : statistical decoding . it is a randomized algorithm that requires the computation of a large set of parity - check equations of moderate weight . we solve here several open problems related to this decoding algorithm . we give in particular the asymptotic complexity of this algorithm , give a rather efficient way of computing the parity - check equations needed for it inspired by isd techniques and give a lower bound on its complexity showing that when it comes to decoding on the gilbert - varshamov bound it can never be better than prange s algorithm .
coatings are commonly applied to the exterior of thin cylindrical wires or fibers to provide protection and/or enhance performance ( e.g. electrical wire and fiber - optic cable ) .methods of coating include extruding a fiber through a die ( die coating ) or drawing a fiber from a liquid bath ( dip coating ) . during the coating process, a uniform liquid film can become unstable to interfacial perturbations that may develop further into droplets .this effect , which detracts from the quality of a coating , has inspired a wide array of studies on the formation and motion of perturbations on cylindrical fibers .fibers can also be coated by a continuously - fed axisymmetric fluid flow down the length of a vertical fiber ( see figure [ f - anjet ] ) as has been examined in several analytical and experimental studies .the geometry of the unperturbed flow is an annular film with a fixed internal boundary and a free surface at the outer fluid - air interface .it is well known that the free surface of this annular film becomes unstable to interfacial perturbations , as shown in figure [ f - anjet ] .herein we present an experimental study on an annular viscous film with a particular focus on the initial formation and longer - time dynamics of perturbations along the film free surface . with gravity acting towards the right .perturbations develop along the free surface some distance down the fiber ; once formed , these perturbations continue to travel down the fiber .image length = 9.7 cm.,width=3 ] a related problem to annular films is on the motion of cylindrical jets , which in contrast have no fixed internal boundary .analytical studies on the motion and stability of inviscid and viscous jets dates back to the work of plateau , lord rayleigh , weber and chandrasekhar .it is known that capillary effects drive perturbation growth along the jet free surface , often referred to as the plateau - rayleigh instability .analytical results developed from temporal linear stability theory were tested in experiments by donnelly & glaberson , who found strong ( fair ) quantitative agreement between the theoretical and measured dispersion relation for an inviscid ( viscous ) jet .the dynamics of free surface perturbations along cylindrical jets and annular films are , however , quite different . in the cylindrical case jet breakup occurs when the perturbations become sufficiently large , while in the annular case the large amplitude perturbations remain connected by a liquid film .recently , several theoretical studies have analyzed the temporal linear stability of an annular viscous film flowing down a vertical fiber in the stokes and moderate reynolds number limits ; the base flow in these studies is assumed to be a steady , unidirectional parallel flow .here we test these results by determining whether : ( i ) the base flow used in matches the experimental flow ; and ( ii ) the dispersion relations derived in the stokes and moderate reynolds number limits correctly predict the nascent growth of perturbations measured in low and moderate reynolds number flows . as free surface perturbations travel down a vertical fiber , many interesting phenomena occur . in experiments , kliakhandler ,davis and bankoff ( kdb ) observed three types of behavior far down the length of the fiber ( 2 m ) . at the highest flow rate ( regime a ) , the film between perturbations is thick and uniform , and faster moving perturbations collide into slower moving perturbations ( unsteady behavior ) . 
at an intermediate flow rate ( regime b ) , the spacing , size and speed of the perturbations is constant so that no collisions occur ( steady behavior ) .and at the lowest flow rate ( regime c ) , the fluid periodically drips from the tank , rather than jets as with the higher flow rates , creating a regular spacing between perturbations near the tank outlet .the long time between drips allows the film connecting consecutive perturbations to thin and subsequently become unstable to smaller capillary perturbations .( figure 1 in illustrates these three regimes of behavior . )simulations of a stokes flow model developed by kdb qualitatively captured two of the three observed behaviors ( regimes b and c ) , while the behavior associated with the highest flow rate ( regime a ) could not be replicated .craster & matar ( cm ) select a different scaling than kdb to derive an evolution equation for the free surface . using traveling wave solutions ,their stokes flow model quantitatively predicted the perturbation speed and height of regime a measured by kdb .the model also qualitatively captured regime c , though the steady pattern of perturbation spacing found in regime b could not be matched with traveling wave solutions . in experiments , cm observed regime b near the tank outlet , however , they found the regularly spaced pattern of perturbations disassembled itself further down the fiber . from this observation , cm concluded regime b is a transient rather than steady regime .the contradiction in observations of regime b ( steady behavior ) by kdb and cm motivated us to look more closely at the steady and unsteady states by examining the dynamics of the perturbations where they initially form along the fiber . in experiments using fluids with different densities , surface tensions and viscositieswe observe regimes a ( unsteady ) , b ( steady ) and c ( dripping ) as described by kdb , though the focus of this paper is on regimes a and b. in our experiments , we observe the flow transitions abruptly from unsteady to steady behavior at a critical flow rate ( the value of is dependent on the particular fluid ) . in a recent independent study , duprat _ et al . _ explain the transition from regime a ( unsteady behavior ) to regime b ( steady behavior ) as a transition from a convective to absolute instability . in their experiments with silicone oil using a range of fiber and orifice radii, they find the transition occurs only at intermediate film thicknesses and for sufficiently small fiber radii ; at thin or thick film thickness , they find the perturbation behavior remains convective ( unsteady ) .these criteria may explain why cm did not observe the steady dynamics of regime b in their experiments with silicone oil .here we find the transition from unsteady to steady behavior is also correlated to the rate at which perturbations naturally form along the fiber . for ( steady case ) , the rate of perturbation formation is constant . as a resultthe position along the fiber where perturbations form is nearly fixed , and the spacing between consecutive perturbations remains constant as they travel 2 m down the fiber . for ( unsteady case ) , the rate of perturbation formation is modulated . as a resultthe position along the fiber where perturbations form oscillates irregularly , and the initial speed and spacing between perturbations varies resulting in the coalescence of neighboring perturbations further down the fiber . the paper is organized as follows . 
the experimental setup and properties of the unperturbed flow are presented in section [ sec - exper ] . measurements of the perturbation growth are compared to analytical predictions for stokes and moderate reynolds number conditions in section [ sec - perform ] . the perturbation behavior exhibited in regimes a ( unsteady ) and b ( steady ) near the tank outlet is closely examined in section [ sec - stunst ] . conclusions are provided in section [ sec - conc ] . ( not to scale ) ; the unperturbed film radius measured from the fiber centerline , the unperturbed film thickness , the perturbation amplitude and the perturbation wavelength are indicated . the experimental setup consists of viscous fluids , a reservoir tank / orifice assembly , nylon fishing line , a high - speed digital imaging camera , illumination , a computer and edge - detection software ( a schematic of the experiment is shown in figure [ f - jetsch](a ) ) . the reservoir tank ( 6 l capacity ) is graduated at 100 ml increments to measure flow rate . an orifice , machined with a flat edge ( inner radius = 0.11 cm , outer radius = 0.16 cm , length = 2.6 cm ) , is attached to the bottom of the tank to ensure a reproducible solid / fluid / air contact line in the experiment . a nylon fiber ( cm ) , anchored from above , passes through the center of the tank / orifice assembly , and is held vertically plumb with weights attached 2 m below the orifice . the fluid , which is gravitationally forced from the tank , coats the fiber to create an annular film . to reduce air currents and other noise during data collection , the entire apparatus is enclosed by an aluminum frame with plastic sheet sidewalls and top . the motion of the annular film is recorded using a high - speed digital imaging camera ( phantom v4.2 ) at rates between 1000 - 4000 frames / s and an image size of 64 x 512 pixels with the camera focused on approximately the upper 10 cm of the fiber . illumination is obtained using silhouette photography following , with a 250 w lamp , an experimental grade one - way transparent mirror ( edmund scientific , a40,047 ) and high contrast reflective screen material ( scotchlite 3 m 7615 ) . movies of the annular film are recorded and downloaded to a computer using camera software . the free surface of the film is determined from movie images using an edge - detection algorithm . the algorithm locates the free surface by interpolating the maxima positions in a gradient image ; the gradient image is produced using the frei - chen operator . the algorithm can detect the edge of the film free surface to within approximately 1/10 of a pixel , which for the screen resolution in our experiments corresponds to cm . the experimental fluids consist of castor oil , vegetable oil ( crisco ) and an 80:20 glycerol / water solution ( by weight ) . the temperature , density ( ) , surface tension ( ) and dynamic viscosity ( ) of the fluids , and the framing rate and screen resolution used in the experiments , are listed in table [ tab-1 ] . the surface tension was measured at room temperature using a fisher 21 tensiomat and viscosity was measured using a temperature controlled cone and plate rheometer ( brookfield , model dv - iii+ ) . the fluid temperature varied by less than 4.6% , 1.9% and 1.4% in the castor oil , vegetable oil and glycerol solution experimental runs , respectively . we note that the selection of fluids allows us to independently probe the influence of surface tension or viscosity on flow behavior , since castor oil and vegetable oil have comparable surface tension while vegetable oil and the glycerol solution have comparable viscosity .
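the edge - detection step is only described qualitatively above , so the following is a minimal sketch of one way it could be implemented : a frei - chen gradient image followed by parabolic sub - pixel interpolation of the row - wise maxima . the kernel normalisation , the image orientation and the interpolation choice are assumptions for illustration , not necessarily those of the software used in the experiments .

```python
import numpy as np
from scipy.ndimage import convolve

# Frei-Chen gradient masks (horizontal and vertical edge components).
SQ2 = np.sqrt(2.0)
G1 = np.array([[1.0, SQ2, 1.0],
               [0.0, 0.0, 0.0],
               [-1.0, -SQ2, -1.0]]) / (2.0 * SQ2)
G2 = G1.T

def free_surface_profile(image):
    """Sub-pixel estimate of the film-edge column in every image row.

    `image` is a 2-D grayscale array with the fiber axis along rows and the
    radial direction along columns (an assumption about the camera frame).
    """
    gx = convolve(image.astype(float), G2)
    gy = convolve(image.astype(float), G1)
    grad = np.hypot(gx, gy)                 # gradient-magnitude image

    edge = np.empty(image.shape[0])
    for i, row in enumerate(grad):
        j = int(np.argmax(row))             # integer-pixel maximum
        delta = 0.0
        if 0 < j < len(row) - 1:
            # Parabolic interpolation of the three samples around the peak
            # gives a sub-pixel correction in (-0.5, 0.5).
            denom = row[j - 1] - 2.0 * row[j] + row[j + 1]
            if denom != 0.0:
                delta = 0.5 * (row[j - 1] - row[j + 1]) / denom
        edge[i] = j + delta
    return edge
```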
table [ tab-1 ] lists the fluid properties and experimental conditions . experimental and theoretical studies often assume the base flow of an annular film is steady , unidirectional and parallel . under these conditions , the unperturbed flow with free surface located at and constant pressure field is described by the boundary value problem for the axial velocity , where the boundary conditions include no - slip at the fiber and zero tangential stress at the free surface . ( [ e - bvp ] ) can be solved exactly for the axial velocity to obtain ( [ e - uniflow ] ) . using ( [ e - uniflow ] ) , the flow rate of an annular film can be expressed in terms of and as ( [ e - qtheory ] ) . by comparing ( [ e - qtheory ] ) to experimental data , we can test the assumption that the base flow of an annular film is a steady , unidirectional parallel flow . figure [ f - qvv](b ) shows a comparison of the flow rate as a function of the unperturbed film radius for vegetable oil , glycerol solution and castor oil measured directly in experiments ( symbols ) and using ( [ e - qtheory ] ) with cm ( dotted line ) . we find excellent agreement between ( [ e - qtheory ] ) and the experimental data in the castor oil ( circle ) and vegetable oil ( square ) experiments , which indicates the flow in these experiments is well approximated by a unidirectional parallel flow . the theory overestimates by as much as 12% in the glycerol solution experiment ( triangle ) . this is not surprising since the assumptions on the flow are equivalent to steady stokes flow . given the larger reynolds number , we suspect the stokes flow assumption is not valid for the glycerol solution experiment . based on the comparison in figure [ f - qvv](b ) , we estimate ( [ e - uniflow ] ) and ( [ e - qtheory ] ) are valid when the reynolds number is sufficiently small . the experiments conducted by kdb and cm meet this criterion , thus we conclude their assumption that the flow is steady , unidirectional and parallel is indeed valid .
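the closed - form expressions ( [ e - uniflow ] ) and ( [ e - qtheory ] ) are not reproduced in this copy , so the following is a minimal numerical sketch of the base - flow calculation they represent , assuming a steady , unidirectional parallel stokes flow with no slip at the fiber surface and zero shear at the free surface ; the variable names ( fiber radius r_f , free - surface radius r_s ) and the illustrative numbers are ours , not values taken from the experiments .

```python
import numpy as np

def axial_velocity(r, r_f, r_s, rho, mu, g=9.81):
    """Steady, unidirectional parallel-flow profile of an annular film on a
    vertical fiber: no slip at the fiber surface r = r_f and zero tangential
    stress at the free surface r = r_s (rho, mu: density and viscosity)."""
    return (rho * g / (4.0 * mu)) * (2.0 * r_s**2 * np.log(r / r_f)
                                     - (r**2 - r_f**2))

def flow_rate(r_f, r_s, rho, mu, g=9.81, n=2000):
    """Volumetric flow rate Q from trapezoidal integration of 2*pi*r*u(r)."""
    r = np.linspace(r_f, r_s, n)
    f = 2.0 * np.pi * r * axial_velocity(r, r_f, r_s, rho, mu, g)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

# Illustrative, roughly castor-oil-like numbers in SI units (not the measured
# values): rho ~ 960 kg/m^3, mu ~ 0.7 Pa s, fiber radius 0.25 mm, film 1 mm.
print(flow_rate(2.5e-4, 1.0e-3, 960.0, 0.7))   # ~1e-8 m^3/s, i.e. ~0.01 ml/s
```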
in the sections that follow , we examine the dynamics of a perturbed annular film including the initial formation and longer - time dynamics of interfacial perturbations along the free surface . the image in figure [ f - anjet ] illustrates the capillary instability an annular viscous film undergoes as the unperturbed free surface becomes unstable to undulations that develop into large amplitude perturbations . next we present experimental observations on the growth of these interfacial perturbations and compare their initial growth to theoretical predictions developed from linear stability analysis . before proceeding , we first recount relevant stability results developed in the stokes and moderate reynolds number flow limits . craster & matar derive a long - wave stokes flow evolution equation for the free surface of the annular film , under the assumption that the unperturbed film radius is small relative to the capillary length ( i.e. , ) and the reynolds number is sufficiently small ( ) , to obtain their evolution equation ( [ e - cras ] ) , where are dimensionless variables satisfying the scalings and . conducting a linear stability analysis by perturbing about the base flow , where is the ( real ) dimensionless wavenumber and is the ( complex ) dimensionless growth rate , cm obtain the dispersion relation ( [ e - stdisp ] ) for the growth rate . in an analytical study , trifonov derived model equations for fluid flowing down the inside or outside of a vertical cylinder at moderate reynolds number ; the model includes evolution equations for the film thickness , and volumetric flow rate , . in a recent study , sisoev _ et al . _ rescale trifonov s equations for flow down the outside of a vertical cylinder , casting the model in terms of a generalized falling film model ( see eqs . ( 11)-(13 ) in ) . conducting a linear stability analysis of the rescaled equations by perturbing about the base solution , where is the ( real ) dimensionless wavenumber and is the ( complex ) dimensionless growth rate , sisoev _ et al . _ obtain the dispersion relation ( [ e - redisp ] ) for , where and the constant coefficients are defined in the appendix . the variables are dimensionless quantities satisfying the scalings , where represent a stretching parameter and characteristic velocity scale , respectively . we note that the long - wave model used by sisoev _ et al . _ was derived under the assumption that . the stability results developed by craster & matar and sisoev _ et al . _ are temporal analyses ( since and ) , and thus model the case in which interfacial perturbations grow in amplitude everywhere along the film . our interest is in testing these stability predictions by comparing the theoretical dispersion relations ( [ e - stdisp ] ) and ( [ e - redisp ] ) to the growth rate of perturbations measured in experiments conducted in the stokes and moderate reynolds number flow limits . figure [ f - cascoll ] shows a series of images tracking the formation of a perturbation along an annular film of castor oil . in frame ( b ) , a small amplitude perturbation first appears along the film approximately 5.4 cm from the orifice ( circled region ) . frames ( c)-(j ) track the position of this perturbation as it grows in amplitude and saturates in size . once formed , the perturbation continues moving down the fiber ( not shown ) . since the perturbation grows in amplitude as it travels down the fiber , the flow is spatially ( convectively ) unstable rather than temporally ( absolutely ) unstable to perturbations .
) and ( b ) wavelength ( ) of a nascent perturbation as a function of time for castor oil ( /s ) ; experimental data corresponds to the perturbation tracked in figure [ f - cascoll ] .( a ) the initial amplitude growth of the perturbation is exponential ( shown in inset on log - linear scale ) followed by nonlinear saturation .dotted line : fit of data to corresponding to growth between frames ( b)-(g ) in figure [ f - cascoll ] .( b ) the wavelength decreases during the time interval that the amplitude grows exponentially and then saturates in length as the amplitude saturates in size.,title="fig:",width=3 ] ) and ( b ) wavelength ( ) of a nascent perturbation as a function of time for castor oil ( /s ) ; experimental data corresponds to the perturbation tracked in figure [ f - cascoll ] .( a ) the initial amplitude growth of the perturbation is exponential ( shown in inset on log - linear scale ) followed by nonlinear saturation .dotted line : fit of data to corresponding to growth between frames ( b)-(g ) in figure [ f - cascoll ] .( b ) the wavelength decreases during the time interval that the amplitude grows exponentially and then saturates in length as the amplitude saturates in size.,title="fig:",width=3 ] to characterize the growth of a perturbation we measure the amplitude ( half the radial distance from first minima to first maxima ) and the wavelength ( the axial distance from first to second maxima ) as shown in figure [ f - jetsch](b ) using edge - detection software ; both measurements are made in the moving reference frame of the perturbation .the data shown in figure [ f - gr ] corresponds to the perturbation followed in figure [ f - cascoll ] .figure [ f - gr](a ) shows the nascent growth of the amplitude is exponential ( inset ) followed by a slower phase as the perturbation saturates in size ( cm ) .the growth rate for the initial formation of the amplitude is determined from a least squares fit of the data to an exponential function yielding the dimensional growth rate s ( fit indicated by dotted line in figure [ f - gr](a ) ) .the wavelength of this perturbation decreases from cm to 0.80 cm during the time interval that the amplitude grows exponentially , before saturating in length to cm , as shown in figure [ f - gr](b ) .the decrease in during the exponential phase of growth indicates the annular film is unstable to a range of wavenumbers ( rather than to one fixed value .the behavior displayed in figure [ f - gr ] for the amplitude and wavelength is typical of observations made in the castor oil , vegetable oil and glycerol solution experiments .comparison of perturbation growth to the stokes ( [ e - stdisp ] ) or moderate reynolds number ( [ e - redisp ] ) dispersion relations depend on the flow conditions in each experiment , specifically on the reynolds and bond numbers ( provided in table [ tab-2 ] ) . in the castor oil experiments , and thus satisfying the requirements of the stokes model ( ) . since in the glycerol solution experiments ,inertial effects can not be ignored , and so we compare this case to the moderate reynolds number model . with and vegetable oil experiments are on the border of the requirements for the stokes model . 
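as a brief aside , the least - squares fit used above to extract the dimensional growth rate from the early amplitude data can be sketched as below ; the fit is linear in log - amplitude over the exponential window , and the numbers in the usage example are made up .

```python
import numpy as np

def exponential_growth_rate(t, amplitude):
    """Least-squares estimate of sigma in  a(t) ~ a0 * exp(sigma * t).

    `t` and `amplitude` are 1-D arrays restricted to the window of
    exponential growth (frames (b)-(g) in the castor-oil example); the fit
    is linear in log-amplitude, so zero or negative samples must be excluded.
    """
    t = np.asarray(t, dtype=float)
    y = np.log(np.asarray(amplitude, dtype=float))
    sigma, log_a0 = np.polyfit(t, y, 1)
    return sigma, np.exp(log_a0)

# Hypothetical data in (s, cm), just to show the call:
t = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])
a = 0.002 * np.exp(4.0 * t)
print(exponential_growth_rate(t, a))   # ~ (4.0, 0.002)
```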
in this casewe compare the experimental data to both the stokes and moderate reynolds number dispersion relations ./s ) ; ( b ) : q = 0.416 - orange , 0.383 - black , 0.342 - cyan , 0.252 - red , 0.211 - green and 0.170 - blue ( /s ) .vertical bars represent the standard deviation of and horizontal bars represent the range of measured during the period of exponential growth over all perturbations measured .corresponding colored curves represent theoretical prediction given by the stokes flow dispersion relation ( [ e - stdisp ] ) of plotted over the range of measured in experiments .,title="fig:",width=3 ] /s ) ; ( b ) : q = 0.416 - orange , 0.383 - black , 0.342 - cyan , 0.252 - red , 0.211 - green and 0.170 - blue ( /s ) .vertical bars represent the standard deviation of and horizontal bars represent the range of measured during the period of exponential growth over all perturbations measured .corresponding colored curves represent theoretical prediction given by the stokes flow dispersion relation ( [ e - stdisp ] ) of plotted over the range of measured in experiments .,title="fig:",width=3 ] figure [ f - cmdisp ] shows a comparison of the measured amplitude growth rate to the dispersion relation developed by craster & matar in the stokes flow limit ( [ e - stdisp ] ) for ( a ) castor oil and ( b ) vegetable oil at various flow rates . at a given flow rate , the growth rate for several perturbations was measured ( 8 - 12 perturbations for castor oil , and 15 - 44 for vegetable oil ) .the average dimensionless growth rate ( ) and dimensionless wavenumber ( ) measured over all the perturbations is denoted by a circle with each color corresponding to a different flow rate .the vertical bars represent the standard deviation of all the growth rates measured at a given flow rate during the period of exponential growth .since the perturbation wavelength decreases over a range of values during the exponential phase of growth , we can not assign a single wavenumber to its growth . instead, the horizontal bars represent the range of wavenumber measured during the period of exponential growth of all the perturbations .the corresponding colored curves represent the real part of the growth rate predicted by ( [ e - stdisp ] ) plotted over the range of wavenumber measured at each flow rate .we consider the theory to be in agreement with the experimental data ( at a given flow rate ) if the theoretical curve overlaps the rectangular region defined by the resolution bars of the data .figure [ f - cmdisp ] shows that the stokes theory is in agreement with four of the six castor oil experiments and with five of the six vegetable oil experiments . 
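the agreement criterion just described , namely that the theoretical curve must pass through the rectangle spanned by the growth - rate error bar and the measured wavenumber range , can be written as a small check ; the dispersion relation passed in is left as a user - supplied function , since ( [ e - stdisp ] ) itself is not reproduced here , and the example numbers are invented .

```python
import numpy as np

def theory_overlaps_data(omega_theory, k_range, omega_mean, omega_std, n=200):
    """Check whether a theoretical growth-rate curve overlaps the data box.

    `omega_theory` is a user-supplied function returning the (real part of
    the) dimensionless growth rate at a dimensionless wavenumber; `k_range`
    is the (k_min, k_max) interval measured during exponential growth, and
    omega_mean +/- omega_std is the measured growth rate.  The theory is
    counted as agreeing if its curve enters the rectangle they define.
    """
    k = np.linspace(k_range[0], k_range[1], n)
    omega = np.array([omega_theory(ki) for ki in k])
    return bool(np.any((omega >= omega_mean - omega_std) &
                       (omega <= omega_mean + omega_std)))

# Hypothetical dispersion curve and data box:
print(theory_overlaps_data(lambda k: 0.2 * k * (1.0 - k), (0.4, 0.7), 0.045, 0.01))
```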
in the other three experiments , the theory overestimates the measured values by 10 to 13% .the quantitative agreement between theory and experimental data is excellent , a significant result considering : ( i ) the comparison is between a temporal stability theory and a spatial instability of the film , and ( ii ) the value of the reynolds number in the vegetable oil experiments , is slightly higher than the criteria for the stokes theory , /s ) ; ( b ) : q = 0.538 - orange , 0.493 - black , 0.437 - red , 0.381 - green and 0.325 - blue ( /s ) .vertical bars represent the standard deviation of and horizontal bars represent the range of measured during the period of exponential growth over all perturbations measured .corresponding colored curves represent theoretical prediction given by the moderate reynolds number flow dispersion relation ( [ e - redisp ] ) of plotted over the range of measured in experiments .,title="fig:",width=3 ] /s ) ; ( b ) : q = 0.538 - orange , 0.493 - black , 0.437 - red , 0.381 - green and 0.325 - blue ( /s ) .vertical bars represent the standard deviation of and horizontal bars represent the range of measured during the period of exponential growth over all perturbations measured .corresponding colored curves represent theoretical prediction given by the moderate reynolds number flow dispersion relation ( [ e - redisp ] ) of plotted over the range of measured in experiments .,title="fig:",width=3 ] figure [ f - sdisp ] shows a comparison of the measured amplitude growth rate to the dispersion relation developed by sisoev __ in the moderate reynolds number limit ( [ e - redisp ] ) for ( a ) vegetable oil and ( b ) glycerol solution at various flow rates .the data and theory are presented in a similar fashion to figure [ f - cmdisp ] with the exception that the dimensionless growth rate and wavenumber are given by and , and the growth rates for the glycerol solution experiments are averaged over 88 to 102 perturbations .we recall that the moderate reynolds number model is valid as long as ; in all of the experiments shown in figure [ f - sdisp ] , we find the moderate reynolds number model overestimates the measured growth rates by 28 to 50% in the vegetable oil experiments and 15 to 48% in the glycerol solution experiments , as shown in figure [ f - sdisp ] .clearly , the stokes model is more accurate at predicting the growth rate of the perturbations in the vegetable oil experiments than the moderate reynolds number model .this is somewhat surprising since the vegetable oil experiments slightly exceed the reynolds number limit of the stokes model , but satisfy the assumption on for the moderate reynolds number model .while the theoretical growth rates in figure [ f - sdisp ] are on the same order of magnitude as the measured values , the quantitative match between theory and data is not strong .we note that the range of the measured amplitude growth rate in experiments ( indicated by the vertical bars in figures [ f - cmdisp ] & [ f - sdisp ] ) varies by fluid and flow rate . for castor oil( figure [ f - cmdisp](a ) ) , the range is fairly small which we attribute to the low reynolds number ( ) in the experiments . 
for the experiments with vegetable oil ( figures [ f - cmdisp](b ) and [ f - sdisp](a ) ) and glycerol solution ( figure [ f - sdisp](b ) ) with /s, the range of the measured amplitude growth rates is large .naively , one could attribute this to the higher reynolds number in these experiments ( ) .this is , however , not the complete picture .notice the range is significantly smaller for the glycerol solution experiment at /s ( blue ( rightmost ) data set in figure [ f - sdisp](b ) ) .the difference in this data set compared to the other glycerol solution and vegetable oil sets is in the behavior of the perturbations .the perturbation behavior in the sets with a large range of amplitude growth rates is unsteady , while the behavior in the blue glycerol solution data set is steady .( the notion of unsteady and steady perturbation behavior will be explained in detail in section [ sec - stunst ] . )therefore , we find the range of amplitude growth rate of the perturbations is correlated to both the reynolds number of the flow and the longer - time dynamics of the perturbations .next , we examine the dynamics of perturbations after their initial formation and explain a physical mechanism that controls a known transition in the flow from unsteady to steady perturbation behavior .the dynamics of interfacial perturbations along an annular film flowing down a vertical fiber can be broken down into three essential stages : ( i ) initial exponential growth of the perturbation amplitude accompanied by a decrease in wavelength ; ( ii ) nonlinear saturation of the perturbation amplitude and wavelength ; and ( iii ) longer - time behavior in which the perturbation wavelength may ( unsteady - see figure [ f - anjet ] ) or may not ( steady ) vary along the fiber ; this last stage has been noted in other experimental studies .here we explain a physical mechanism that controls this third stage of dynamics ./s , time between images is 0.0125 s , elapsed time = 0.1 s , image height = 9.7 cm.,width=2 ] in experiments with all three fluids , we observe the perturbation motion abruptly transitions from unsteady ( regime a ) to steady ( regime b ) behavior at a critical flow rate , ( 0.0095 /s for castor oil , 0.119 /s for vegetable oil and 0.345 /s for glycerol solution ) , similar to observations made by duprat __ in their experiments with silicone oil .following kdb , we define the flow to be steady if no perturbations coalesce as they travel down the full length of the fiber ( 2 m ) , and unsteady otherwise while the flow is jetting from the orifice .an example of unsteady behavior in which two perturbations coalesce is shown in figure [ f - coal ] . note: we will not be examining the dripping state , regime c , which occurs at a lower flow rate , .we find the transition from unsteady ( ) to steady ( ) behavior is robust in the sense that once an experiment transitions to steady behavior ( as the flow rate decreases ) it does not revert back to the unsteady state ./s , elapsed time = 8.09 s , image height = 8.22 cm .the top of the image is 0.58 cm below the orifice.,width=3 ] the space - time plots in figure [ f - streak ] illustrate ( a ) unsteady and ( b ) steady perturbation behavior for experiments with glycerol solution . each plot is focused on 8.22 cm of the film , with the top of each plot located 0.58 cm below the orifice ; the time span of each plot is 8.09 s. 
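a space - time plot of this kind can be assembled from the edge - detected free - surface profiles as sketched below ( the gray - level mapping itself is described in the next paragraph ) ; matplotlib is used only for display , and the frame spacing , pixel size and array orientation are assumptions about how the data are stored .

```python
import numpy as np
import matplotlib.pyplot as plt

def space_time_plot(profiles, dt, dz):
    """Build and display a space-time (kymograph) plot of the free surface.

    `profiles` is a sequence of 1-D free-surface radius profiles r(z), one
    per movie frame (e.g. the output of the edge-detection step); `dt` is
    the time between frames and `dz` the axial pixel size.
    """
    data = np.asarray(profiles)             # shape (n_frames, n_axial_pixels)
    extent = [0.0, data.shape[0] * dt,      # time axis
              data.shape[1] * dz, 0.0]      # distance below the orifice
    plt.imshow(data.T, aspect='auto', cmap='gray', extent=extent)
    plt.xlabel('time (s)')
    plt.ylabel('distance down the fiber (cm)')
    plt.colorbar(label='free-surface radius (gray level)')
    plt.show()
```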
the plots are created by mapping the radius of the free surface of the film , to a gray level with lighter ( darker ) gray level corresponding to thicker ( thinner ) regions of the free surface .the light characteristic lines in the plots indicate the location of perturbations as they move down the fiber , and their slope represent the speed of the perturbations .two features in the space - time plots distinguish the unsteady and steady perturbation behavior .first , the location along the fiber that perturbations form ( which we refer to as the boundary ) oscillates irregularly in the unsteady case and appears nearly fixed in the steady case .second , in the unsteady case perturbations coalesce as faster moving perturbations collide into slower moving perturbations ( indicated by intersecting characteristic lines ) , whereas in the steady case perturbations do not coalesce as they travel with the same terminal speed down the fiber ( indicated by parallel characteristic lines ) .the longer - time motion of the perturbations appears to be correlated to the motion of the boundary .notice in figure [ f - streak](a ) that large spatial variations in the boundary modulate the perturbation speed ( i.e. , the slope of the characteristic lines ) which results in coalescence events later down the fiber . in the steady case, there is no spatial variation in the boundary , and as a result , the perturbations remain equally spaced as they travel with constant terminal speed down the full length of the fiber ( not shown in figure [ f - streak](b ) ) .our observations of the steady case ( regime b ) are consistent with those of kdb . given the robustness of the steady dynamics in all of our experiments , we conclude this is not a transient state as cm report .finally , we note that when the flow is unsteady , the oscillation frequency of the boundary increases with increasing flow rate ; for example , compare the boundary frequency in figures [ f - streak](a ) and [ f - boundosc ] . ) where perturbations initially form along the fiber ( i.e. , the location of the boundary ) as a function of time , corresponding to the data shown in figure [ f - streak](a ) for the unsteady case at /s ( ) and in figure [ f - streak](b ) for the steady case at /s ( + ) .( b ) average distance ( ) from the orifice that perturbations form as a function of flow rate for experiments with glycerol solution .vertical bars represent the standard deviation of over all the perturbations measured at a given flow rate .the dotted vertical line denotes the transition flow rate separating steady and unsteady perturbation behavior.,title="fig:",width=3 ] ) where perturbations initially form along the fiber ( i.e. , the location of the boundary ) as a function of time , corresponding to the data shown in figure [ f - streak](a ) for the unsteady case at /s ( ) and in figure [ f - streak](b ) for the steady case at /s ( + ) .( b ) average distance ( ) from the orifice that perturbations form as a function of flow rate for experiments with glycerol solution .vertical bars represent the standard deviation of over all the perturbations measured at a given flow rate .the dotted vertical line denotes the transition flow rate separating steady and unsteady perturbation behavior.,title="fig:",width=3 ] that perturbations form as a function of time .plots correspond to the data shown in figure [ f - dvt](a ) in : ( a ) steady behavior ( ) at /s and ( b ) unsteady behavior ( ) at /s for glycerol solution . 
in the steady case , the fundamental frequency = 14.45 hz , is the first harmonic in the power spectrum . in the unsteady case ,the bandwidth supporting the fundamental peak is larger than the steady case.,title="fig:",width=3 ] that perturbations form as a function of time .plots correspond to the data shown in figure [ f - dvt](a ) in : ( a ) steady behavior ( ) at /s and ( b ) unsteady behavior ( ) at /s for glycerol solution . in the steady case , the fundamental frequency = 14.45 hz , is the first harmonic in the power spectrum . in the unsteady case ,the bandwidth supporting the fundamental peak is larger than the steady case.,title="fig:",width=3 ] versus data in experiments with glycerol solution .the dotted vertical line denotes the transition flow rate separating steady and unsteady perturbation behavior .( b ) the ( normalized ) cumulative integrated power measured within the support of the first peak between f hz and f hz for the spectra shown in figure [ f - spectra](b ) .the interquartile region ( iqr ) is the frequency bandwidth bounding the middle 50% of the cumulative integrated power.,title="fig:",width=3 ] versus data in experiments with glycerol solution .the dotted vertical line denotes the transition flow rate separating steady and unsteady perturbation behavior .( b ) the ( normalized ) cumulative integrated power measured within the support of the first peak between f hz and f hz for the spectra shown in figure [ f - spectra](b ) .the interquartile region ( iqr ) is the frequency bandwidth bounding the middle 50% of the cumulative integrated power.,title="fig:",width=3 ] .,width=3 ] to characterize the motion of the boundary , we measure the distance from the orifice that each perturbation initially forms along the fiber ( at a fixed flow rate ) using edge - detection software ; a perturbation is detected when its amplitude ( ) initially exceeds 1/10 of a pixel or 0.002 cm .the data shown in figure [ f - dvt](a ) corresponds to the space - time plots of the unsteady ( ) and steady ( ) experiments shown in figure [ f - streak ] .figure [ f - dvt](b ) is a plot of the average distance from the orifice that perturbations form ( ) as a function of flow rate in experiments with glycerol solution .the vertical bars represent the standard deviation of over all the perturbations measured at a fixed flow rate and the dotted vertical line represents the transition flow rate , the distance that perturbations form from the orifice increases monotonically with increasing flow rate . in the steady case , at a given flow rate the distance is nearly constant , whereas in the unsteady case , the range of distance that perturbations form increases with increasing flow rate .these results are consistent with experimental observations made by duprat __ . to understand the physical mechanism controlling the steady and unsteady states we examine the power spectra of in the glycerol solution experiments .figures [ f - spectra](a ) and ( b ) represent the power spectra for the steady ( ) and unsteady ( ) perturbation behavior shown in figure [ f - dvt](a ) . in the steady case the fundamental frequency ( ) , which is the first harmonic of the power spectra , represents the rate at which perturbations form along the fiber ( e.g. , perturbations / s in the experiment shown in figure [ f - streak](b ) ) ; in the unsteady case the fundamental peak is much broader so that is less well defined . as a function of increasing flow rate , the fundamental frequency ( i.e. 
, rate of perturbation formation ) increases linearly when the perturbation dynamics are steady ( ) , and is scattered about perturbations / s when the dynamics are unsteady ( ) ( see figure [ f - ff](a ) ) .another feature in the power spectra distinguishing steady and unsteady behavior is in the frequency bandwidth supporting the fundamental peak ; the bandwidth of the unsteady spectra is larger than the steady spectra in figure [ f - spectra ] .we characterize the bandwidth of the fundamental peak by measuring the interquartile region ( iqr ) .the iqr is defined as the frequency bandwidth bounding the middle 50% of the ( normalized ) cumulative integrated power under the fundamental peak ; an example is shown in figure [ f - ff](b ) corresponding to the power spectra in figure [ f - spectra](b ) where and are lower and upper frequency bounds supporting the fundamental peak ( see figure [ f - spectra](b ) ) , and is the power at frequency .the iqr , or bandwidth , measures the modulation of the fundamental frequency , or more physically , the modulation of the rate at which perturbations form along the fiber .a jump in the bandwidth occurs at the transition flow rate in the glycerol solution experiments , as shown in figure [ f - iqr ] . for ,the bandwidth is nearly zero , thus the rate of perturbation formation is nearly constant resulting in longer - time steady perturbation behavior .for the bandwidth is sizable and increases with increasing flow rate , thus there is a significant modulation of the rate at which perturbations form .it is this large modulation that results in the longer - time unsteady dynamics of the perturbations .while the transition in figure [ f - iqr ] is striking , it is not entirely clear whether it is a subcritical or supercritical transition , and if subcritical , whether the transition is hysteretic .in an experimental study , we examine the motion of an annular viscous film flowing under the influence of gravity down the outside of a vertical fiber . we find the unperturbed flow is well approximated by a steady , unidirectional parallel flow when .the dynamics of the perturbed flow can be divided into three stages : ( i ) initial exponential growth of the perturbation amplitude accompanied by a decrease in the perturbation wavelength ; ( ii ) nonlinear saturation of the perturbation amplitude and wavelength ; and ( iii ) longer - time behavior in which the perturbation wavelength may ( unsteady ) or may not ( steady ) vary along the film . during the first stage ,we find linear stability theory results developed from a long - wave stokes flow model are in excellent agreement with the initial growth of perturbations measured in experiments .the agreement between linear stability results developed from a moderate reynolds number model and experimental data are not as strong as in the stokes flow case . 
a close examination of the longer - time steady and unsteady behavior of interfacial perturbationsis shown to be correlated to the range of : ( i ) the rate of exponential growth of the perturbation amplitude ; and ( ii ) the location along the fiber where perturbations initially form .in particular , we find the rate of growth of the amplitude and the location along the fiber where perturbations form is nearly constant for the steady case , and varies over a range of values in the unsteady case .furthermore , we find the transition in the longer - time perturbation dynamics from unsteady to steady behavior at a critical flow rate occurs because of a transition in the rate at which perturbations naturally form along the free surface of the film . in the steady case ,the rate of perturbation formation is nearly constant , resulting in the perturbations remaining equally spaced as they travel with the same terminal speed down the fiber . in the unsteady case ,the rate of perturbation formation is modulated which results in the modulation of the initial speed and spacing between perturbations and ultimately leads to the coalescence of perturbations further down the fiber .it is not clear whether this transition is subcritical or supercritical , and if subcritical , whether the transition is hysteretic .we would like to thank a. belmonte , m. g. forest , m. frey , d. henderson , h. segur , h. stone and t. witelski for many helpful discussions and timothy baker for his aid in building the experimental apparatus .this research was supported by a national science foundation reu grant ( phy-0097424 & phy-0552790 ) .
it is known that the free surface of an axisymmetric viscous film flowing down the outside of a thin vertical fiber under the influence of gravity becomes unstable to interfacial perturbations . we present an experimental study using fluids with different densities , surface tensions and viscosities to investigate the growth and dynamics of these interfacial perturbations and to test the assumptions made by previous authors . we find the initial perturbation growth is exponential followed by a slower phase as the amplitude and wavelength saturate in size . measurements of the perturbation growth for experiments conducted at low and moderate reynolds numbers are compared to theoretical predictions developed from linear stability theory . excellent agreement is found between predictions from a long - wave stokes flow model ( craster & matar , j. fluid mech . * 553 * , 85 ( 2006 ) ) and data , while fair agreement is found between predictions from a moderate reynolds number model ( sisoev _ et al . _ , chem . eng . sci . * 61 * , 7279 ( 2006 ) ) and data . furthermore , we find that a known transition in the longer - time perturbation dynamics from unsteady to steady behavior at a critical flow rate , is correlated to a transition in the rate at which perturbations naturally form along the fiber . for ( steady case ) , the rate of perturbation formation is constant . as a result the position along the fiber where perturbations form is nearly fixed , and the spacing between consecutive perturbations remains constant as they travel 2 m down the fiber . for ( unsteady case ) , the rate of perturbation formation is modulated . as a result the position along the fiber where perturbations form oscillates irregularly , and the initial speed and spacing between perturbations varies resulting in the coalescence of neighboring perturbations further down the fiber .
the earth is a tectonically active planet .large scale thermal convection , which is related to the motion of tectonic plates on the earth s surface , is taking place in the earth s crust and mantle ( which collectively extend from the core - mantle boundary , km , to the earth s surface , km , where is the radius ) .the crust and mantle consist primarily of silicate rocks .the outermost layer is the crust , which has a thickness ranging from about 80 km under tibet to about 5 km in beneath oceans . as discussed below, the physical properties of the crust are highly laterally heterogeneous .the mantle is divided into the upper mantle , which extends from the base of the crust to a depth of about 410 km ( km ) ; the transition zone , in the depth range 410 km to 660 km ( km to km ) ; and the lower mantle , in the depth range 660 km to 2891 km ( km to km ) .the boundaries between the upper mantle and the transition zone , and between the transition zone and the lower mantle , are thought to be due to phase transitions in silicate minerals .the earth s core consists primarily of iron and thus has a considerably greater density than the mantle .the outer core ( km to km ) is liquid ; magnetohydrodynamic convection in the outer core is considered to be the cause of the earth s magnetic field .the inner core , which extends from the base of the outer core to the earth s center ( to km ) , is solid . for further general information on the structure of the earth s interior see recent textbooks ( e.g. , lay and wallace , 1995 ; shearer , 1999 ) and the works cited therein . due to the increase of pressure with depth ,the earth s density and elastic constants are vertically heterogeneous . however , because the earth is tectonically active , its physical properties are also laterally heterogeneous .let us denote the laterally averaged one - dimensional ( 1-d ) density structure by , where is the density in units of gm/ ( or kg / m ) , and denote the three dimensional ( 3-d ) density distribution by , where and are respectively the colatitude and longitude , in spherical polar coordinates . 
the earth s average density can be determined from its total mass , kg , and its outer radius .if the earth were a homogeneous sphere , its moment of inertia would be , where is the earth s outer radius .however , the observed moment of inertia has a much smaller value , approximately .this confirms that the earth s inner regions ( i.e.the outer and inner core ) are significantly denser than average .even if the earth s total mass and moment of inertia are combined with other geodetic data such as the spherical harmonic expansion of the earth s external gravity field ( which is inferred from satellite data ) , these data provide integral constraints on the earth s density distribution but are insufficient to determine it uniquely .it therefore is necessary to use seismological data as the primary basis for inferring the earth s density distribution .however , for technical reasons that are not discussed in detail here , inferring the earth s density distribution directly from observed seismological data is not practically realizable ( bullen , 1975 ; kennett , 1998 ) .it thus is necessary to follow a two step inference process .first the spatial distribution of seismic wave velocities is inferred from seismological data ; second , the density distribution is inferred from the seismic velocities , using the above integral constraints together with other empirical relations .both of these steps introduce uncertainty into the density model . for the purposes of very long baseline neutrino experiments, isotropic earth models can probably be regarded as sufficiently accurate ; the discussion in this paper is limited to such models .the most general anisotropic elastic solid has 21 independent elastic constants , but an isotropic elastic solid has only two independent elastic constants , the lam constants and . in an isotropic elastic body the velocity of compressional elastic waves ( p - waves ) , , and the velocity of transverse elastic waves ( s - waves ) , , are given respectively by as a rough approximation , the ratio of p- and s - wave velocities in the solid earth is given by but the exact value of the proportionality constant varies with the chemical composition , pressure and temperature .inversion of observed seismic data for earth structure is an underdetermined inverse problem , and all earth models are subject to error and uncertainty .regularization constraints of some type ( e.g. , smoothness , minimum variation from the starting model , etc . ) must be applied to obtain a stable solution .inverse theory allows formal error estimates to be made , but it is well known that systematic errors , which can not be quantitatively estimated , may often be on the same order or larger .systematic errors are due to factors such as the uneven distribution of seismic observatories on the earth s surface ( in particular the lack of observatories on the ocean bottom ) and the uneven spatial distribution of earthquakes , and thus can not be easily reduced . the approximations( e.g. , ray - theoretic , linearized perturbation with respect to a spherically symmetric model , etc . )used to model seismic wave propagation are another significant source of systematic errors ; progress in forward modeling and inversion techniques is leading to reduction of such errors .anelastic attenuation ( absorption ) of seismic waves also places inherent limits on resolving power , especially of deeper and shorter wavelength structure .a 1-d model seismic velocity specifies and , while a 3-d model specifies and . 
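the wave - speed expressions referred to above are the standard isotropic relations v_p = sqrt ( ( lambda + 2 mu ) / rho ) and v_s = sqrt ( mu / rho ) , and the usual rough approximation is v_p ~ sqrt ( 3 ) v_s , which holds exactly for a poisson solid ( lambda = mu ) . the sketch below simply evaluates these relations ; the numerical values are illustrative and are not taken from any particular earth model .

```python
import numpy as np

def wave_speeds(lam, mu, rho):
    """Isotropic P- and S-wave speeds from the Lame constants and density.

        v_p = sqrt((lambda + 2*mu) / rho),   v_s = sqrt(mu / rho)

    All quantities in SI units (Pa, kg/m^3 -> m/s).
    """
    vp = np.sqrt((lam + 2.0 * mu) / rho)
    vs = np.sqrt(mu / rho)
    return vp, vs

# Illustrative, upper-mantle-like values: lambda = mu = 80 GPa, rho = 3300 kg/m^3.
vp, vs = wave_speeds(lam=80e9, mu=80e9, rho=3300.0)
print(vp, vs, vp / vs)   # vp/vs = sqrt(3) for a Poisson solid
```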
a 1-d modelmay either be a globally averaged model or a model of the depth dependence under some region ; similarly , a 3-d model may either be a global model or may be limited to some particular region .the main focus of seismological research on earth structure has shifted to the quest to infer 3-d earth models . in this context , the role of 1-d models is to provide the starting point for defining a 3-d model as a perturbation to the 1-d starting model .the primary data used to obtain seismic velocity models are the arrival times of seismic body waves ( p- and s - waves that travel through the earth s interior ) .the arrival time data are then analyzed to determine the location and origin time of each earthquake and can then be converted to the travel time from the source to the receiver .a large dataset of travel time data for many earthquakes is then inverted to obtain a new earth model , and the earthquake location process is then updated .this process is iterated several times until convergence is obtained .travel time data are in some cases supplemented by surface wave dispersion data ( the frequency dependence of the phase and group velocities of seismic surface waves ) or free oscillation data ( the frequencies of several hundred of the longest period modes , which are basically equivalent to surface waves ) .a recent trend is to use the seismic waveforms themselves ( the recorded displacement of the ground as a function of time ) , rather than secondary data such as the travel times , as the data in the inversion .improvements in data and in inversion methodology over the past 20 years have led to steady improvement in seismic velocity models .two well known 1-d models are the `` preliminary reference earth model '' ( prem ) of dziewonski and anderson ( 1981 ) and model ak135 ( kennett _ et al . _ , 1995 ) .the latter is based on a more extensive dataset than the former , and is therefore more accurate .research on 3-d earth structure is a highly active field ; recent reviews by garnero ( 2000 ) and nataf ( 2000 ) provide a useful starting point .lateral variation of elastic properties and density is greatest in the crust and uppermost mantle , but the density of broadband seismic observatories used for global seismology is far too small ( especially in view of the non - uniform geographical distribution ) to determine the lateral heterogeneity of the `` crustal structure '' ( where this term includes both the crust and the uppermost mantle ) .geophysicists must therefore use data collected from various local and regional surveys to correct for the effect of crustal structure so that their data can then be analyzed to determine 3-d earth structure on a global scale .two widely used models for this purpose are crust 5.1 ( mooney _ et al_. , 1998 ) , which has a resolution of 5 ( i.e. , about 500 km ) , and its successor , crust 2.0 ( http://mahi.ucsd.edu/gabi/rem.dir/crust/crust2.html ) , with a resolution of 2 .these models are not intended as accurate models of the crust , but rather are intended as `` pretty good '' models , for the purpose of removing crustal effects .physicists planning neutrino beam experiments should exercise appropriate caution when using these models .both global and regional density models are subject to considerable uncertainty .global scale density models are typically derived by applying an equation of state , which is an empirical approximation , to seismic velocity models ( e.g. 
, bullen , 1975 ) .crustal density models are derived using a variety of empirical relations between seismic velocities and densities ( see mooney __ , 1998 ) .it is striking that , especially for the case of sedimentary rocks , many of these empirical relations were published in the 1970s , which suggests that there has not recently been a high level of activity in this field .it is difficult to quantify the uncertainty of published density models .one interesting approach is that of kennett ( 1998 ) .he exploited the fact that the frequencies of the longest period modes of the earth s free oscillations depend separately on the elastic constants and the density to a marginally resolvable extent to conduct the following numerical experiment .he fixed the seismic velocities and density of his earth model to the values of the prem model , and constructed a random ensemble of density models centered around the prem model .he then calculated the free oscillation eigenfrequencies for each model and compared them to the observed eigenfrequencies to construct a set of the 50 best fitting models .these density models , all of which can be said to fit the free oscillation data acceptably , have a range of about per cent in the upper mantle .this should not be regarded as a conclusive error estimate , but it is one reasonable indication of the general level of uncertainty of present 1-d global density models .let us consider a hypothetical neutrino beam experiment ( fig .[ fig1 ] ) with a neutrino source in tokyo and a detector in shanghai .note that the neutrino beam follows a straight line , but a seismic wave from tokyo to shanghai ( or vice versa ) follows a curved path ( the path of minimum travel time ) .thus it is not possible to infer the physical properties of the neutrino beam path based only on observations of seismic waves traveling from tokyo to shanghai .published 3-d earth models , which were obtained by analyzing a large dataset using many sources and receivers , can be used to obtain a seismic velocity profile along the neutrino beam path , which can then be empirically converted to density .if the accuracy of the density profile obtained using the above procedure is deemed insufficient , further information could in principle be obtained by conducting a seismic observation campaign with receivers along the entire great circle from tokyo to shanghai .however , the fact that much of the beam path lies under the oceans would greatly complicate such a campaign .figure [ fig2 ] shows the various density profiles under the hypothetical tokyo - shanghai path , taken from model crust 2.0 . as shown in fig .[ fig2 ] , the variation between the various density profiles is per cent in the depth range from 1020 km and per cent in the depth range from 2030 km .the variations in density are due to the differences in the physical properties of the various types of geological units , but can also be regarded as a crude indicator of the general level of uncertainty of the density . 
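profiles such as those in fig . [ fig2 ] ultimately rest on empirical velocity - to - density conversions of the kind mentioned above . as one concrete illustration ( and only an illustration , not the relation actually used to build crust 2.0 ) , gardner s 1970s - era rule for sedimentary rocks can be sketched as follows ; the coefficient and exponent are the commonly quoted values and should be treated as approximate .

```python
def gardner_density(vp):
    """Empirical density estimate (g/cm^3) from P-wave speed vp (m/s).

    Gardner's relation, rho ~ 0.31 * vp**0.25, was fitted to sedimentary
    rocks and is only one of several velocity-density relations in use; it
    is shown here purely to illustrate the conversion step involved.
    """
    return 0.31 * vp**0.25

# A hypothetical crustal velocity profile (m/s) converted to density:
for vp in (2000.0, 4000.0, 6000.0):
    print(vp, round(gardner_density(vp), 2))
```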
as , generally speaking , the amplitude of the earth s lateral heterogeneity decreases with increasing depth , the variability of per cent in fig . [ fig2 ] can reasonably be regarded as an upper bound on the uncertainty . note that the density in the depth range 20 - 30 km in the rightmost column of fig . [ fig2 ] ( 3.35 g/ ) is the value for the uppermost mantle , and is about 10 per cent higher than the density of the lowermost crust . neutrino beam physicists should be aware of the various uncertainties and limitations of present geophysical knowledge of the earth s density distribution , as discussed in this paper . the planning of neutrino beam experiments should include simulation of the data reduction process , including a propagation of error analysis , to study the effect of this uncertainty . three possible scenarios can be envisioned : ( 1 ) the uncertainty of present density models poses no significant problems ; ( 2 ) moderate reduction of the uncertainty , through more detailed analysis of existing data , is required ; ( 3 ) significant reduction of this uncertainty , by conducting a large scale campaign of geophysical observations , is required . obviously , scenario ( 1 ) would be most desirable , while scenario ( 3 ) would be discouraging . this issue should be resolved at an early stage of the planning of neutrino beam experiments . bullen , k. , _ the earth s density _ , chapman & hall ( london , 1975 ) .
several proposed experiments will send beams of neutrinos through the earth along paths with a source - receiver distance of hundreds or thousands of kilometers . knowledge of the physical properties of the medium traversed by these beams , in particular the density , will be necessary in order to properly interpret the experimental data . present geophysical knowledge allows the average density along a path with a length of several thousand km to be estimated with an accuracy of about per cent . physicists planning neutrino beam experiments should decide whether or not this level of uncertainty is acceptable . if greater accuracy is required , intensive geophysical research on the earth structure along the beam path should be conducted as part of the preparatory work on the experiments . long baseline neutrino experiments , earth s density distribution 13.15.+g 14.60.pq 23.40.bw 91.35.-x
the ever - increasing number of cars on roads today has led to a burden on the management of transportation infrastructure .a prime example we observe today is vehicle owners finding it difficult to search for parking spaces .this is more evident in large cities and in prime locations where it is not uncommon to find vehicles moving around inside a parking lot anticipating a parking space to free up .in addition to the driver discomfort and frustration , searching for a parking space leads to a significant loss of personal time and an increase in fuel consumption .searching for a parking space in an optimized manner is thus a problem that demands the attention of researchers .the availability of real - time parking information has the potential for immense time and economic savings as drivers know in advance the presence of empty parking spots at the end of their trips .a highly desirable feature in such a system is that the source of real - time information must be independent of infrastructure .parking solutions that depend on infrastructure , for example the use of on - site cameras , often incur significant costs .more importantly the implementation of such a solution is not generic with respect to the layout of the parking lot .modern vehicle advancements have led to cars that are today equipped with vision and range - based navigation sensors and offer a source of real - time parking space information which is independent of surrounding infrastructure .the focus of researchers has thus shifted towards modifying the navigating sensors and algorithms to tap real - time information on the occupancy of parking spaces , and subsequently relaying the useful information to a wide audience _. even though exploiting data from vehicle sensors for generating parking information is now close to a reality , research on parking strategies and policies that maximize the quality of this real - time information system is still nascent .hence this paper aims to bridge the research gap by introducing a simulation - based approach to develop and investigate optimal parking policies for an intelligent parking system that depends on information collected from vehicle sensors .we propose a parking simulator that simulates a real parking lot in the city of ann arbor in michigan .cars entering and leaving the lot are modelled as probe cars ( equipped with sensors ) and non - probe cars ( without sensors ) to implement a `` connected vehicle '' environment .the developed policies are evaluated using the simulator based on the accuracy of occupancy prediction and a single optimal policy is determined .the paper is structured as follows : i ) section ii reviews related work on existing parking search models and optimal path planning in parking .ii ) section iii describes the developed parking simulator comprised of four modules , and provides a description on the optimal parking space allocation policy for probe cars to maximize the quality and availability of real - time parking information iii ) section iv offers a visualization of the parking simulator and a comparison of results between parking information collected one - way and two - ways by probe cars .iv ) section v provides a summary of the current work and a final conclusion .we discuss previous studies in literature on parking search optimization and related works .luo et al . proposed a parking detection algorithm based on data collected from range - based sensors on vehicle , and improved its accuracy by introducing slam method _. 
most of input data and configurations in simulation are based on this project .bogoslavskyi et al .proposed a markov decision process ( mdp ) based planner to calculate paths that minimize the time it takes to search for a parking space and walking up to a target destination after parking _. they calculated the paths using occupancy probabilities of parking spaces .the occupancy probabilities are considered uncertain and derived from visual sensor data and prior probability estimates of the spaces .farkas and lendak used simulation to study the improvement in parking search cruise time from crowd - sensing real - time parking information in an urban environment _ . real - time information on occupancy of parking spaces alone may not improve parking search time for drivers . for example , tasseron et al . used simulation to understand the impact of disseminating on - street parking information using vehicle - vehicle communication and vehicle - sensor communication .they concluded that , in contrary to theoretical expectations , the cruise time for searching for parking spots does not decrease significantly and may also increase , even under occupancy rates as high as 90% to 95% .the authors link it to the more likelihood of drivers parking their cars before reaching their destinations _. in another undesirable situation , broadcasting real - time information on parking spaces can negatively affect system performance by impacting driver behavior in unexpected ways .wahle et al .discussed several studies on the various negative impacts of sharing real - time parking space information with too many drivers _. under these scenarios , our study assumes that parking policies can be used as a medium to control parking probabilities and the undesirable effects of excess information while achieving reduction in parking search times for the entire system of vehicles .agent - based simulation models that involve parking space detection must be based on suitable route choices for probe cars in order to maximize the sensing of the parking spaces and maintain latest information .several models in the fields of robotics and computer science have dealt with this type of path planning problem .singh et al . proposed an efficient algorithm for the multi - robot informative path planning problem ( mipp ) to generate informative paths while maximizing a sub - modular function like mutual information _. martinez - cantin et al .proposed a bayesian optimization method for a mobile robot planning its path for optimal sensing of the environment under time constraints _ . chekuri et al .set up an `` orienteering problem '' for a weighted and directed graph .nodes of the graph are visited by a walk and the algorithm developed maximizes a submodular set function associated with the nodes visited _the simulator consists of three modules , event module , routing module and scanning module , whose output are connected to a data visualizer .the event module generates consequent arrivals and departures of both probe cars and normal cars , and each car will follow the route decided by the routing module . 
finally , the scanning module will be activated only for probe cars and the estimated states of parking spaces along its trip will be updated using bayes rule .all three modules transmit their status to the visualization module simultaneously .consequent events in the event module include arrivals and departures of probe cars ( type 1 car ) or normal cars ( type 2 car ) .it is reasonable to formulate the parking arrival / departure process as a queue as a service system , where servers represents parking spaces respectively , and is the queue capacity , i.e. maximum number of cars waiting in the parking lot for next available parking space . arrival process s intensity is as a function of time during the day , and the type of the next arriving car is randomly decided with fixed ratio .the arrival intensity follows a piecewise function in units of cars per hour .it is set to emulate the pattern of intra - city traffic with average rate of 120 vehicles per hour , whose intensity is much higher in early morning and late afternoon when people drive in and out for the regular work hours , and slightly higher at noon when people drive during their lunch break _. each arriving car will be assigned to a parking space according to a specified policy in the routing module unless the number of cars in the system . the service completion time ( time spent in the parking lot )are exponential variable with constant parameter .first come first serve rule is applied in the queue when all parking spaces are full , and cars arrive when the queue buffer has reached the capacity will leave the system immediately . the system state ( ) variable is a vector of dimensions .the first elements are boolean variable indicating the occupancy of parking spaces accordingly .the element represents the total number of cars in the system , and the last element indicates the number of cars in the queue .counter variables are total number of arrivals of probe cars / normal cars by time , and are total number of departures by time . by tracking these two set of variables ,we are able to observe how the system evolves with time by generating a discrete event list , which can be referred in the general parallel servers queue simulations in _ . the discrete event simulation that generates an event list is presented in the pseudocode below : * variables : * : total time of simulation actual status of parking space * initialization : * generate and indicating variable .if , set , ; else , set , . generate and reset . assign a parking space according to the parking policy . ( continued ) generate and reset . activate the routing and scanning models . activate the routing and scanning models . generate and reset .assign a parking space according to the parking policy . generate and reset . activate the routing and scanning models . generate and reset . one major objective of this simulation research is to find the impacts of different parking assignment policies on the service performance , which are integrated in the routing module .this module has two functions .one is to assign an available parking space to the arrived car , and the other is to generate driving routes for probe cars when they arrive or depart , which will be used in the scanning model later . 
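returning to the event module , the time - varying poisson arrival stream it generates can be produced by thinning , as sketched below ; the piecewise rate , the probe - car ratio and the random seed are hypothetical stand - ins for the values actually used in the simulator .

```python
import numpy as np

rng = np.random.default_rng(0)

def lam(t_hours):
    """Hypothetical piecewise arrival intensity (cars/hour): higher in the
    early morning and late afternoon, a smaller bump around lunch time."""
    if 7 <= t_hours < 9 or 16 <= t_hours < 18:
        return 220.0
    if 11 <= t_hours < 13:
        return 150.0
    return 80.0

def arrivals(t_end_hours, lam_max=220.0, p_probe=0.3):
    """Thinning algorithm for a non-homogeneous Poisson arrival process.

    Returns (arrival_time, car_type) pairs, where car_type 1 marks a probe
    car and 2 a normal car; p_probe is the fixed probe-car ratio.
    """
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)      # candidate at the maximal rate
        if t > t_end_hours:
            return events
        if rng.random() < lam(t) / lam_max:      # accept with prob lam(t)/lam_max
            car_type = 1 if rng.random() < p_probe else 2
            events.append((t, car_type))

print(len(arrivals(24.0)))   # number of arrivals in one simulated day
```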
for the sake of simplifying the workflow , the routing module should not intervene in the event module as a preceding module ; thus we assume that arriving vehicles travel instantaneously to their parking locations . we evaluate four policies for assigning cars to parking spaces :
* random assignment : both types of cars are assigned to available parking spaces randomly , which is expected to represent the average performance of the system .
* nearest parking : both types of cars park in the available parking space closest to the entrance . this simulates a destination - oriented parking policy , assuming the entrance is the final destination for all drivers .
* maximum satisfaction guidance : normal cars park in the space closest to the entrance while probe cars park in the space which is estimated to be most likely empty . in other words , this functions as the maximum exploitation policy .
* near - optimal guidance : normal cars park in the space closest to the entrance while probe cars park in the space which maximizes the information gain from scanning . in other words , this functions as the maximum exploration policy .
for generating driving routes , cars always arrive along the shortest path , but there are two policies for departure routes . one is that the roads in the parking lot are two - way , so cars follow the same path as when they arrived . the other is that the roads are one - way , so cars follow a different path when leaving . for an unregulated parking lot , the direction of cars driving in and out is always a mix of both , so it is sufficient to study only these two extreme cases . for each car arriving at a given time , the action consists of assigning it to a parking space and choosing a route on a vertex - edge graph . to improve the quality of parking guidance services , the principal goal is to accelerate the exploration process by distributing probe cars optimally . a very natural way to quantify the informativeness of a chosen route is mutual information , more specifically , the information gain from scanning the vertices ( parking spaces ) of the route . we only consider the posterior of each action because of the instantaneous routing assumption ; thus the entropy is measured over the vertices along the route , where the estimated status of each parking space is a value between 0 and 1 , the entropy is taken over all vertices , and the conditional entropy is obtained by conditioning on the set observed after taking the action . thus the mutual information measures the uncertainty reduction resulting from a chosen route . chekuri et al . show that this is a submodular function , which possesses the diminishing returns property : the more locations that have already been sensed , the less information is gained by sensing new locations . furthermore , as the principle of optimality is strictly obeyed in the sequence of scanning posteriors , we can find an optimal policy for each probe car by maximizing the information gain using direct policy search , on the grounds of the following facts : 1 . for the case of
mixed arrivals of probe cars and normal cars , seeking the optimal policy becomes an np - hard partially observable markov decision process ( pomdp ) , because the actual state is only known for the spaces occupied by probe cars . the only controllable objects are the probe cars , whose departure times are also random variables since the profiles of their departure paths are stochastic . the action space is a high - dimensional compound of the routing and scanning decisions . since it is prohibitive to find an explicit optimal policy , we implement an alternative approximation using direct policy search at each step , which is practical especially when the candidate policy space is relatively small . in robotics path planning practice , this has been shown to be an efficient near - optimal policy for maximizing information gain . the scanning module updates the estimated system state . when a probe car arrives or departs , the scanning module is activated . we use recursive bayesian updating in this module . let the occupancy of a parking space at a given time be a random variable , and let its estimated state be the probability that the space is occupied ; we are only interested in cases where a measurement is actually obtained . after each measurement , we use bayes theorem to compute the posterior probability that the parking space is occupied . the likelihood data come from field tests in our previous research , in which we tested the effectiveness of radar for detecting the occupancy of parking spaces . from this experience , we calculated the prior and likelihood data in table [ table_bayesian_1 ] and table [ table_bayesian_2 ] . when a measurement is obtained , the estimate is updated with the corresponding likelihood from these tables , and the posterior at the next time step follows from bayes rule . the scanning module implements this recursive bayesian updating over a vector of estimates , in which each element represents the probability that the corresponding parking space is occupied . the updating of the estimate matrix also includes a discount factor , since the perceived parking state of a space that was scanned earlier but has not been scanned since diminishes toward the unknown status at each time step . the error of estimation at a parking space is calculated with this discounting operator , and the total absolute error is the sum over all parking spaces . the visualizer is developed as a back - end python module and a front - end web module . the visualizer is interfaced with the other simulation modules and illustrates the guided movement of probe cars and non - probe cars through the parking lot . the movement of cars is overlaid on an aerial image of the parking lot used in the field test in . the parking lot , with 160 parking spaces , is converted into a node - and - edge graph as shown in figure [ figure snapshot ] . each node represents a parking space and the edges connecting the nodes are paths for cars to navigate inside the parking lot . different color schemes are used for the vehicles : red for probe cars and blue for non - probe cars . the parameters of the simulator s modules are set as follows . in the event module , arrival events are assumed to be a non - homogeneous poisson process with the arrival rate given by a piecewise function of time in minutes . the parking time follows an exponential distribution .
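the recursive bayesian update and the information - gain criterion described earlier in this section can be illustrated with a small , self - contained python sketch . the sensor likelihoods , the discount factor and the candidate routes below are placeholder values of ours , not the field - test numbers from table [ table_bayesian_1 ] ; only the structure ( bayes rule per scanned space , decay toward the unknown status of 0.5 , and route scoring by expected entropy reduction ) follows the description above .

```python
import math

# hypothetical sensor model: p(measurement | true state), not the field-test values
P_DETECT_OCCUPIED = 0.95      # sensor says "occupied" given the space is occupied
P_FALSE_OCCUPIED = 0.10       # sensor says "occupied" given the space is empty
DISCOUNT = 0.98               # per-step decay of unscanned estimates toward 0.5

def bayes_update(prior_occ, says_occupied):
    """posterior probability that a space is occupied after one radar reading."""
    if says_occupied:
        like_occ, like_emp = P_DETECT_OCCUPIED, P_FALSE_OCCUPIED
    else:
        like_occ, like_emp = 1 - P_DETECT_OCCUPIED, 1 - P_FALSE_OCCUPIED
    num = like_occ * prior_occ
    return num / (num + like_emp * (1 - prior_occ))

def decay(est):
    """unscanned spaces drift back toward the unknown status 0.5."""
    return {s: 0.5 + DISCOUNT * (p - 0.5) for s, p in est.items()}

def entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def route_information_gain(est, route):
    """expected entropy reduction from scanning the spaces on a route;
    scanned spaces are treated as (almost) fully observed afterwards."""
    return sum(entropy(est[s]) for s in route if s in est)

def best_route(est, candidate_routes):
    """near-optimal (greedy one-step) guidance: pick the most informative route."""
    return max(candidate_routes, key=lambda r: route_information_gain(est, r))

# toy usage: 12 spaces, all unknown, one space just scanned as occupied
estimates = decay({s: 0.5 for s in range(12)})
estimates[3] = bayes_update(estimates[3], says_occupied=True)
routes = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
print(best_route(estimates, routes))
```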
in the scanning module , the initial states of all parking spaces are empty , with the corresponding initial estimates . the discount factor of the estimation is fixed . the scanning range of a probe car is at most 6 nodes surrounding its current position . we use the relative error of the posteriors to quantify the performance of each policy . the relative error between the predicted occupancy and the actual occupancy is calculated for increasing market penetration of probe cars , expressed as a percentage . the average error of a given event list is calculated over the successive events in the list . since the arrival rate and the number of vehicles in the parking lot change with time , the relative error fluctuates as a function of the system state . their relationship is shown in figure [ figure_errorwithtime_twoway ] for two - way parking and figure [ figure_errorwithtime_oneway ] for one - way parking , both with the percentage of probe cars in the arrivals . the flow of arriving probe cars and normal cars in the simulation follows the pattern of typical daily intra - city traffic in the literature . in both cases , at time zero the relative error is large across all the policies because of the initial conditions , but it drops considerably as information is gained , and keeps oscillating as the information varies . notice that the near - optimal guidance policy is stable over time , in contrast to the other policies , whose estimation errors fluctuate over time . in addition , this policy is less sensitive to the number of parked vehicles in the system , while the errors of the others increase when the number of parked vehicles drops . the few peaks in its errors are caused by the diminishing of information : no probe cars arrive during those periods and the estimators fall back into the undecided range . another observation is that the oscillation of the errors in the one - way case is relatively smaller than in the two - way case for all policies . this is because a one - way path allows probe cars to scan in a different direction from the route on which they arrived . ideally , if there were no discount factor in the estimations , choosing any parking space in the same row would have an equivalent effect on the mean error . in order to observe the average performance of each policy , the monte carlo method is applied to generate event lists repeatedly . the percentage of probe cars in the arrivals ranges from 0.1 to 0.9 with a step of 0.1 . the simulation is repeated 1,000 times for each setting . the average relative error of the estimations with increasing percentage of probe cars in the arrivals is shown in figure [ figure_twoway ] for two - way parking , and the result for one - way parking is shown in figure [ figure_oneway ] . the near - optimal parking policy outperforms the others in both cases on average .
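the performance measure can be sketched in the same spirit . the definition below ( mean absolute difference between estimated and actual occupancy over all spaces , averaged over the events of one run and then over independent repetitions ) is one plausible instantiation of the relative error described above , not necessarily the exact formula used in the paper , and `run_simulation` is a hypothetical callback returning per - event snapshots .

```python
def relative_error(estimates, actual):
    """mean absolute difference between estimated occupancy probabilities
    and the true 0/1 occupancy over all parking spaces (assumed definition)."""
    return sum(abs(estimates[s] - actual[s]) for s in actual) / len(actual)

def average_error_over_events(snapshots):
    """average the error over the successive events of one event list;
    each snapshot is a pair (estimates, actual) taken right after an event."""
    errs = [relative_error(est, act) for est, act in snapshots]
    return sum(errs) / len(errs)

def monte_carlo_average(run_simulation, probe_fraction, n_runs=1000):
    """repeat the whole simulation with independent seeds and average,
    as done for each probe-car penetration level from 0.1 to 0.9."""
    total = 0.0
    for seed in range(n_runs):
        snapshots = run_simulation(probe_fraction, seed=seed)
        total += average_error_over_events(snapshots)
    return total / n_runs
```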
in contrast to the near - optimal policy , the maximum satisfaction policy , which assigns cars to the parking space most likely to be available , contributes only modestly to the estimations . this result is intuitive because over - exploiting the given information sacrifices exploration of the unknown regions . this trade - off has to be considered in the decision - making so that the parking guidance is improved while individual drivers demands are still satisfied . another important finding is that using the near - optimal policy can significantly compensate for low fleet penetration of probe cars . as the results show , the performance of the near - optimal policy is acceptable even with low penetration of probe cars , and the relative error is only slightly reduced when the percentage increases from 10% to 90% . according to the market prediction in the literature , even assuming that the growth of the adas market follows a logistic function , the penetration rate of probe cars will be less than 20% by 2020 , which is not adequate for the other policies . therefore we conclude that applying the near - optimal policy enables covering the parking lot with a low percentage of cars equipped with sensors . on the other hand , we should not overstate the effect , because of the assumption that the near - optimal policy always directs drivers to available parking spaces . in practice , there is a chance that our policy directs drivers to a space that is actually not available . in future work this adverse event can be incorporated into the analysis . this paper described how we built a parking simulator consisting of an event module , a routing module , a scanning module and a visualizer . four parking guidance policies are tested on this platform with mixed traffic of probe cars and normal cars . compared to the other policies , the near - optimal policy stably and accurately estimates the parking occupancy in the repeated experiments . a further discussion on the trade - off between exploitation and exploration in optimal routing provides insight toward an improved parking guidance policy for probe cars . future work will focus on building a multistage stochastic programming model for parking . the first part of the work is to formulate a cost function for the optimal parking guidance policy , including a misguidance penalty . the second task is to extend the current one - stage policy to a multistage policy for scenarios in which the arriving car is assigned to an occupied parking space . this will require compound modeling of event generation and routing . these improvements of the simulation will give insight into a more comprehensive exploration - exploitation parking guidance policy . this project was funded by the mobility transformation center .
luo , q. , r. saigal and r. hampshire , searching for parking spaces via range - based sensors . preprint , arxiv:1607.06708 [ cs.ro ] , 2016 .
bogoslavskyi , i. , l. spinello , w. burgard and c. stachniss , where to park ? minimizing the expected time to find a parking space . 2015 ieee international conference on robotics and automation ( icra ) , seattle , wa , 2015 , pp . 2147 - 2152 . doi : 10.1109/icra.2015.7139482 .
farkas , k. and i. lendak , simulation environment for investigating crowd - sensing based urban parking . 2015 international conference on models and technologies for intelligent transportation systems ( mt - its ) , budapest , 2015 , pp . 320 - 327 . doi : 10.1109/mtits.2015.7223274 .
tasseron , g. , k. martens and r. van der heijden , the potential impact of vehicle - to - vehicle and sensor - to - vehicle communication in urban parking . ieee intelligent transportation systems magazine , pp . 22 - 33 , summer 2015 . doi : 10.1109/mits.2015.2390918 .
wahle , j. , a. l. c. bazzan , f. klügl and m. schreckenberg , the impact of real - time information in a two - route scenario using agent - based simulation . transportation research part c : emerging technologies , 10.5 ( 2002 ) : 399 - 417 .
singh , a. , a. krause , c. guestrin , w. j. kaiser and m. a. batalin , efficient planning of informative paths for multiple robots . ijcai , vol . 7 , pp . 2204 - 2211 , 2007 .
martinez - cantin , r. , n. de freitas , e. brochu , j. castellanos and a. doucet , a bayesian exploration - exploitation approach for optimal online sensing and planning with a visually guided mobile robot . autonomous robots , 27.2 ( 2009 ) : 93 - 103 .
chekuri , c. and m. pal , a recursive greedy algorithm for walks in directed graphs . 46th annual ieee symposium on foundations of computer science ( focs 05 ) , ieee , 2005 .
geroliminis , n. , dynamics of peak hour and effect of parking for congested cities . transportation research board 88th annual meeting ( no . 09 - 1685 ) , 2009 .
ross , s. , simulation . elsevier , 2013 .
luo , q. , s. zhang , r. saigal and r. hampshire , diffusion model for advanced driver assistance systems market penetration prediction . figshare , 2016 , https://dx.doi.org/10.6084/m9.figshare.3505781.v1 .
real - time parking occupancy information is critical for a parking management system to help drivers park more efficiently . recent advances in connected and automated vehicle technologies enable sensor - equipped cars ( probe cars ) to detect and broadcast available parking spaces when driving through parking lots . in this paper , we evaluate the impact of the market penetration of probe cars on the system performance , and investigate different parking guidance policies to improve the data acquisition process . we adopt a simulation - based approach and impose four policies on an off - street parking lot , influencing the behavior of probe cars so that they park in assigned parking spaces . this in turn affects the scanning routes and the parking space occupancy estimations . the last policy we propose is a near - optimal guidance strategy that maximizes the information gain of the posteriors . the results suggest that an efficient information gathering policy can compensate for low penetration of connected and automated vehicles . we also highlight the policy trade - offs that occur while attempting to maximize information gain through exploration and to improve assignment accuracy through exploitation . our results can assist urban policy makers in designing and managing smart parking systems .
the characterization of porous media at the pore level is undergoing a revolution . through the use of new scanning techniques, we are capable of reconstructing the pore space completely , including the tracking of motion of immiscible fluids .a gap is now appearing between the geometrical characterization of porous media and our ability to predict their flow properties based on this knowledge .the pore scale may be of the order of microns whereas the largest scales e.g. the reservoir scale may be measured in kilometers .hence , there are some eight orders of magnitude between the smallest and the largest scales . at some intermediate scale ,that of the representative elementary volume ( rev ) , the porous medium may be regarded as a continuum and the equations governing the flow properties are differential equations .the crucial problem is to construct these effective differential equations from the physics at the pore scale .this is the upscaling problem .a possible path towards this goal is to use brute computational power to link the pore scale physics to pore networks large enough so that a continuum description makes sense .alas , this is still beyond what can be done numerically .however , computational hardware and algorithms are steadily being improved and we are moving towards this goal .it is the aim of this paper to introduce a new algorithm that improves significantly on the efficiency of network models .these are models that are based on the skeletonization of the spaces in such a way that a network of links and nodes emerge .each link and node are associated with parameters that reflect the geometry of the pore space they represent .the fluids are then advanced by time stepping some simplified version of equations of motion of fluid .the bottle neck in this approach is the necessity to solve the kirchhoff equations to determine the pressure field whose gradients drive the fluids in competitions with the capillary forces .a different and at present popular computational approach , among several , is the lattice boltzmann method .this method , based on simultaneously solving the boltzmann equations for different species of lattice gases , is very efficient compared to the network approach necessitating solving the kirchhoff equations .however , the drawback of the lattice boltzmann approach is that one needs to resolve the pore space .hence , one needs to use a grid with a finer mask than the network used in the network approach .this makes the lattice boltzmann approach very efficient at the scale where the actual shape of the pores matter , but not at the larger scale where the large scale topology of the pore network is more important .further methods which resolve the flow at the pore level are e.g. smoothed particle hydrodynamics and density functional hydrodynamics . when network models are so heavy numerically that the networks that can be studied are not much larger than those studied with the pore scale methods , the latter win as they can give a more detailed description of the flowhowever , if the computational limitations inherent to network models could be overcome , they would form an important tool in resolving the scale - up problem : at small scale network models would be calibrated against the methods that are capable of resolving the flow at the pore level . on large scales , their resultsmay be extrapolated to scales large enough for homogenization , i.e. , replacing the original pore network by a continuum . 
as pointed out above, the bottleneck in the network models is the necessity to determine the pressure field at each time step . when the time steps are determined by the motion of the fluid interfaces, these will be small as they typically are set by the time lapse before the next interface reaches a node in the network .time stepping allows detailed questions concerning how flow patterns develop in time to be answered . that is , the time stepping provides a detailed sequence of configurations where each member of the sequence is the child of the one before and the parent of the one after . if the quantities that are calculated are averages over configurations , time stepping will provide too much information ; for averages the _ order _ in which the configurations occur is of no consequence .if the order in which the fluid configurations occur is scrambled , the averages remain unchanged .this is where the monte carlo method enters .it provides a way to produce configurations that will result in the same averages as those obtained through time stepping .the order in which the configurations occur will be different from those obtained by time stepping .the time stepping procedure necessitates that there are tiny differences between each configuration in the sequence , since the time steps have to be small .this limitation is overcome in the monte carlo method which we will describe here .this makes the monte carlo method much more efficient than time stepping as we will see . in section [ sec : network ]we describe the network model we use to compare the monte carlo method with time stepping , see aker et al . andknudsen et al . . in the next section [ sec : metropolis ] ,we start by explaining the statistical mechanics approach to immiscible two - phase flow in porous media that lies behind the monte carlo algorithm we propose . in particular, we derive the configuration probability the probability that a given distribution of fluid interfaces in the model will appear .this is also known as the _ ensemble distribution _ in the statistical physics community .based on this knowledge , we then go on to describe the monte carlo algorithm itself .this section is followed by section [ sec : results ] where we compare the monte carlo method with time stepping using the same network model described in section [ sec : network ] .we then go on to compare the efficiency in terms of computational cost of the two methods .we end this section by discussing the limitations of the monte carlo algorithm as it now stands and point towards how these may be overcome .we end by section [ sec : conc ] where we summarize the work and draw our conclusions .in order to have a concrete system to work with , we describe here the details of the network model we use .the model is essentially the one first developed in references . for simplicity we do not consider a reconstructed pore network based on a real porous medium , we simply use a two - dimensional square network , with disorder in the pore radii , oriented at 45 with respect to the average flow direction as shown in figure [ fig2 - 1 ] .as described in , we use _ bi - periodic boundary conditions . _hence , the network takes a form of the surface of a torus . in this way, the two - phase flow enters a steady state after an initial transient period .this steady state does _ not _ mean that the fluid interfaces are static .rather , we use capillary numbers high enough so that fluid clusters incessantly form and break up . 
by _ steady state _ we mean that the macroscopic averages , that is , averages over the entire network , are well defined and do not drift . the network contains a fixed number of links . all links have equal length , but their radii have been drawn from a uniform distribution of random numbers . the flow in a link is governed by a washburn - type equation in which the position of the interface in the link enters through the capillary pressure . the capillary pressure is given by the young - laplace equation , whose parameters are the surface tension ( set in dyn / cm ) , the contact angle between the interface and the pore wall , and the average link radius . we assume that the link has a shape such that the capillary pressure attains the given dependence on the interface position ; the form has been chosen so that the capillary pressure vanishes at both ends of the link . the washburn equation then relates the flow in a link to the pressure difference across it minus the capillary pressure , with an effective viscosity ( set in poise ) obtained from the fractions of the link length covered by the non - wetting and wetting fluids , which sum to one . we define the capillary number through an average over all links . a pressure difference is applied across the network . this is done in spite of the network being periodic in the direction of the pressure difference , see knudsen et al . . by demanding balance of flow at each node using the washburn equation ( [ eq : wash ] ) , we determine the pressures at the nodes . this is done by solving the corresponding matrix inversion problem with the conjugate gradient algorithm . when the pressures at the nodes are known , the flow between neighboring nodes connected by a link is calculated using equation ( [ eq : wash ] ) . knowing the velocity of the interfaces in each link , we then determine the time step such that any meniscus can move a maximum distance of , say , one - tenth of the length of the corresponding link in that time . all the interfaces are then moved accordingly and the pressures at the nodes are determined again by the conjugate gradient algorithm . this is equivalent to event - driven molecular dynamics . when an interface reaches a node , the interface will spread into the links that are connected to the node and which have fluid entering them from the node . the rules for how this is done are described in detail in knudsen et al . . we first describe the theory that lies behind the monte carlo algorithm that we present . we need to introduce the concepts of _ configuration _ and _ configuration probability _ , the latter also known as the ensemble distribution in the statistical mechanics community . we then go on to derive the configurational probability . armed with this , we construct the monte carlo algorithm after having presented a short review of the metropolis version of monte carlo . sinha et al . studied the motion of bubbles in a single capillary tube with varying radius . suppose that the capillary tube has a given length and a radius that varies along it , and that each bubble has a position and a width . since the system is one dimensional , all bubbles move with the same speed . the washburn equation ( [ eq : washmany ] ) then gives this common speed ; solving the equations of motion gives the bubble positions as functions of time , which may be inverted to give the time as a function of position . suppose now we have a function of the bubble positions , analogous to the one introduced in equation ( [ eq : fxave ] ) . its time average can again be written as a configurational average with a weight inversely proportional to the flow rate , which is precisely the same expression as in ( [ eq:1dpi ] ) . we now turn to complex network topologies . for concreteness , we may imagine a two - dimensional square network . however , the arguments presented in the following are general .
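the time stepping procedure described above can be made concrete with a minimal python sketch of a single step : assemble and solve the kirchhoff equations with a conjugate gradient solver , compute the link flows , and limit the time step so that no meniscus moves more than one tenth of a link . the linear link law , the fixed inlet / outlet pressures and all names are simplifying assumptions of ours ; the actual model uses the washburn equation with a capillary pressure term and bi - periodic boundary conditions .

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import cg

def time_step(n_nodes, links, conductance, meniscus_pos, link_len,
              inlet, outlet, p_in, p_out):
    """one step of a simplified network model: solve the kirchhoff equations
    for the node pressures with conjugate gradients, compute link flows, and
    advance the menisci by at most one tenth of a link length.

    links is a list of (i, j) node pairs; each link is reduced to the linear
    law q_ij = g_ij (p_i - p_j), with unit cross-section for the velocities.
    """
    fixed = {**{i: p_in for i in inlet}, **{j: p_out for j in outlet}}
    free = [i for i in range(n_nodes) if i not in fixed]
    index = {node: k for k, node in enumerate(free)}

    a = lil_matrix((len(free), len(free)))
    b = np.zeros(len(free))
    for (i, j), g in zip(links, conductance):
        for u, v in ((i, j), (j, i)):
            if u in index:
                a[index[u], index[u]] += g
                if v in index:
                    a[index[u], index[v]] -= g
                else:
                    b[index[u]] += g * fixed[v]     # dirichlet contribution
    p_free, _ = cg(a.tocsr(), b, atol=1e-10)

    p = np.empty(n_nodes)
    for node, val in fixed.items():
        p[node] = val
    for node, k in index.items():
        p[node] = p_free[k]

    q = np.array([g * (p[i] - p[j]) for (i, j), g in zip(links, conductance)])
    # limit the step so that no meniscus moves more than one tenth of a link
    dt = 0.1 * link_len / max(np.abs(q).max(), 1e-12)
    return p, q, meniscus_pos + q * dt, dt
```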
a configuration is given by the position of all interfaces .let us denote that , where is the position of the interface .hence , contains information both on which link the interface sits in and where it sits in the link .a flow passes through the network .the flow equations for the network consist of a washburn constitutive equation for each link combined with the kirchhoff equations distributing the flow between the links .the motion of the interfaces are highly non - linear , but of the form .solving these equations gives .again we consider a function of the position of the interfaces .its time average is here we have inverted so that we have and then substituted .the configurational probability is defined as before , let us now choose to be an interface moving in a link that carries all the flow in the network .such a link is a capillary tube connected in series with the rest of the network . in this casewe have , where is the total flow .hence , we have we have in the discussion so far compared the time evolution of a given sample defined by an initial configuration of interfaces .we now imagine _ an ensemble _ of initial configurations of interfaces .each sample evolves in time and there will be a configurational probability ( [ eq : piintq0 ] ) for each .this will have the same value for each configuration that corresponds to the same flow .hence , we have the configurational probability this equation is the major theoretical result of this paper : all configurations corresponding to the same are equally probable . intuitively , equation ( [ eq : piintq ] ) makes sense : the slower the flow , proportionally the more the system stays in or close to a given configuration .is the system ergodic? equations ( [ eq : fxave ] ) , ( [ eq : fxavemany ] ) and ( [ eq : fxcomplex ] ) answer this question positively .time averages give , by construction , the same results as configurational averages . in order to present the details of the metropolis monte carlo algorithm that we propose , we first review the general formulation of the metropolis algorithm .we have a set of configurations characterized by the variable , the positions of the interfaces .we now wish to construct a _ biased random walk _ through these configurationsso that the number of times each configuration is visited i.e. , the random walk comes within of the configuration is proportional to .proportional to the probability for that configuration .the metropolis algorithm accomplishes this goal . in order to do so , a transitional probability density from state to state is constructed as where is the probability density to pick trial configuration given that the system is in configuration .it is crucial that is symmetric , equations ( [ eq : met ] ) and ( [ eq : sym ] ) ensure detailed balance , detailed balance guarantees that the biased random walk visits the configurations with a frequency proportional to .the generated configurations follow the ensemble distribution . 
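the statement that time averages weight each configuration by the inverse of the flow rate can be checked numerically in the one - dimensional picture : integrating a single coordinate with a position - dependent speed and histogramming the visited positions should reproduce a distribution proportional to the inverse speed . the toy velocity profile in this python sketch is an arbitrary stand - in for the flow rate of the bubble train , not a quantity from the model .

```python
import numpy as np

def stationary_histogram(v, length=1.0, n_bins=50, dt=1e-4, n_steps=1_000_000):
    """integrate dx/dt = v(x) on a periodic domain and histogram the visited
    positions; the result should match 1/v(x) after normalization, mirroring
    the configuration probability pi proportional to 1/q derived in the text."""
    x, counts = 0.0, np.zeros(n_bins)
    for _ in range(n_steps):
        counts[int(x / length * n_bins)] += 1
        x = (x + v(x) * dt) % length
    return counts / counts.sum()

# toy strictly positive velocity profile standing in for the flow rate q(x)
v = lambda x: 1.0 + 0.8 * np.sin(2 * np.pi * x)
empirical = stationary_histogram(v)
centers = (np.arange(50) + 0.5) / 50
predicted = 1.0 / v(centers)
predicted /= predicted.sum()
print(np.max(np.abs(empirical - predicted)))   # small if pi is proportional to 1/q
```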
when we combine equations ( [ eq : piintq ] ) and ( [ eq : met ] ) , we have the metropolis monte carlo algorithm based on equation ( [ eq : metq ] ) consists of two crucial steps .the first step consists in generating a _ trial configuration _ and the second step consists in deciding whether to keep the old configuration or replacing it with the trial configuration .the first step , generating the trial configuration , is governed by the trial configuration probability which must obey the symmetry ( [ eq : sym ] ) .that is , if the system is in configuration , the probability to pick a trial configuration must be equal to the probability to pick as trial configuration if the system is in configuration .suppose the system is in configuration .one needs to define a _neighborhood _ of configurations among which the trial configuration is chosen .if the neighborhood is too restricted , the monte carlo random walk will take steps that are too small and hence would be inefficient . if , on the other hand , the neighborhood is too large , the random walk ends up doing huge steps that will miss the details .we propose generating the trial configurations as follows .our system is shown in figure [ fig3 - 1 ] and consists of links as described in section [ sec : network ] .there is a flow through link connecting the neighboring nodes and .there is a total flow rate in the network given by and a corresponding pressure drop .we choose a randomly positioned sub network as shown in figure [ fig3 - 1 ] .the network consists of links .we lift " the sub network out of the complete network and fold it into a torus , i.e , implementing bi - periodic boundary conditions .the configurations of fluid interfaces in the sub network remains unchanged at this point .we calculate the flow rate in the sub network by solving the kirchhoff equations on the sub network , we _ time step _ the configuration forwards in time while keeping the flow rate constant .we end the time integration when arbitrarily chosen sub network pore volumes have passed through it .the bi - periodic boundaries of the sub network is then opened up and the sub network with the new configuration of fluid interfaces is placed back into the full network .this is then the trial configuration .part of the probabilistic choice of the trial configuration that defines rests on the choice of the sub network : its position is picked at random .hence , if the system is in state or in trial state , the probability to pick a particular sub network is the same .this makes this part of the choice of trial configuration symmetric . when the sub network is time stepped for sub system pore volumes , this is done at constant flow rate .hence , all sub network configurations are equally probable , see equation ( [ eq : piintq ] ) .hence , also this part of the choice of trial configuration is symmetric .the full probability is the probability of picking a given sub network times the probability that a given configuration will occur . 
combining the two leads to the necessary symmetry ( [ eq : sym ] ) .we point out here that whereas the configurational probability in ( [ eq : piintq ] ) is valid for all configurations , through the way we generate our samples , we are restricting ourselves to physically realistic samples in that they are generated through time stepping parts of the system .we can not at this stage prove that this does not bias our sampling .once the trial configuration has been generated , it is necessary to calculate the total flow rate in the network .we then decide to accept the trial configuration by using ( [ eq : metq ] ) .this defines a monte carlo _ update ._ we repeat this procedure until each link in the network has been part of at least one sub network .this defines a monte carlo _ sweep . _we now present numerical results of the monte carlo simulation considering the model described in section [ sec : network ] and we will compare them with the results by time stepping simulations .simulations are performed for two different ensembles , one is when the total flow rate is kept constant ( cq ensemble ) and the other when the total pressure drop is kept constant ( cp ensemble ) . a network of links ( ) is considered for both monte carlo and time stepping procedure .the sub network size is links ( ) . to identify whether the system has reached the steady state , we measured the quantities as a function of time steps in time stepping and as a function of sweeps in the case of monte carlo .we then identified the steady states when the averages of measured quantities ( eg . and or ) did not change with time or with sweeps .we then take average over time ( time stepping ) or sweeps ( monte carlo ) which give us the time average and the ensemble average , respectively .we average 10 different networks , but with the same sequence of networks for both monte carlo and time stepping .first we present the results for cq ensemble .two capillary numbers , and are used , and for each , simulations are performed for different values of non - wetting saturations in intervals of from 0.05 to 0.95 . with constant , the metropolis monte carlo algorithm becomes very simple .equation ( [ eq : metq ] ) simply becomes in other words , all trial configurations are accepted . in figure[ fig4 - 1 ] we plot the non - wetting fractional flow as a function of the non - wetting saturation where the circles and the squares denote the results from monte carlo and time stepping , respectively . the plots ,as expected , show an s - shape .this is because the two immiscible fluids do not flow equally , and the one with higher saturation dominates .hence , the curve does not follow the diagonal dashed line , which corresponds to , shown in the figure .rather , is less than for low values of and higher than for higher value of .it therefore crosses the line at some point , which is not at .this is due to the asymmetry between the two fluids , as one is more wetting than the other with respect to the pore walls .this behaviour is more prominent for the lower value of , as capillary forces play a more dominant role .the curves from the monte carlo and time stepping calculations fall on top of each other for most of the lower to intermediate range of the saturation values and we only see some difference at very high or low . 
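the monte carlo update and sweep described above ( pick a random sub network , time step it at fixed flow rate for a few pore volumes , then accept or reject the resulting trial configuration ) can be sketched as follows for the constant pressure drop ensemble , where the configuration probability proportional to the inverse flow rate gives the acceptance probability min ( 1 , q_old / q_new ) . the `network` helper routines named here are placeholders standing in for the corresponding steps of the algorithm , not an existing api .

```python
def metropolis_sweep(config, q_total, network, rng, lam=10, pore_volumes=4):
    """one monte carlo sweep in the constant-pressure-drop ensemble.

    a trial configuration with total flow rate q_new is accepted with
    probability min(1, q_old / q_new); in the constant flow rate ensemble
    this ratio is always one and every trial is accepted.
    """
    # roughly enough updates so that each link belongs to at least one sub network
    n_updates = max(1, network.num_links() // (lam * lam))
    for _ in range(n_updates):
        sub = network.random_sub_network(lam)            # lift out a lam x lam block
        trial = network.time_step_sub_network(config, sub,
                                              pore_volumes=pore_volumes)
        q_new = network.total_flow_rate(trial)           # full-lattice kirchhoff solve
        if rng.random() < min(1.0, q_total / q_new):
            config, q_total = trial, q_new               # accept the trial
        # otherwise keep the old configuration (reject)
    return config, q_total
```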
we will present a more quantitative comparison between the results of monte carlo and time stepping later in section [ sub : limitations ] . the variation of the total pressure drop for the two capillary numbers as a function of saturation is shown in figure [ fig4 - 2 ] . similar to the fractional flow plots , we see that the results are the same for monte carlo and time stepping for a wide range of saturations . we only see differences at high values . the pressure drop increases with saturation , reaching a maximum at some intermediate saturation , and then decreases again . when the saturation increases from zero , more and more interfaces appear in the system , causing an increase in the capillary barriers associated with the interfaces . as the total flow rate is constant , a higher pressure is needed to overcome the capillary barriers . the decrease after the maximum is due to the decrease of the number of interfaces blocking the fluids . we now turn to the constant pressure ensemble . here we keep the pressure drop constant throughout the calculations . in this case , the metropolis monte carlo acceptance rule , equation ( [ eq : metq ] ) , reduces to accepting a trial configuration with probability $\min ( 1 , q_\text{old}/q_\text{new} )$ , where $q$ denotes the total flow rate . results for the simulations with constant pressure drop are shown in figures [ fig4 - 3 ] and [ fig4 - 4 ] . simulations are performed for two different values of the pressure drop . the steady - state values show a similar variation with saturation as in the constant flow rate ensemble , and we see good agreement between the results for monte carlo and time stepping for a wide range of saturations . here the flow rate varies with the saturation , and the corresponding capillary numbers are plotted in figure [ fig4 - 4 ] for monte carlo and time stepping . as discussed before , the number of interfaces first increases as the saturation increases from zero , reaches a maximum value , and then decreases again as the saturation approaches one . the pressure is constant here , so the total flow rate decreases with increasing capillary barriers at the interfaces , and the capillary number correspondingly varies as in figure [ fig4 - 4 ] . here again , a good match between the results of monte carlo and time stepping can be observed . we show in table [ table1 ] the percentage of rejections for the data shown in figure [ fig4 - 4 ] . the number of rejections is in all cases quite small . this can be understood as follows . set $q^\prime = q + \delta q$ , where $\delta q$ may be positive or negative . hence , the probability to accept the new configuration is $\min\left( 1 , q / ( q + \delta q ) \right) \approx 1 - \delta q / q$ , where we have assumed $|\delta q| \ll q$ . with a small relative change in the flow rate , the probability to reject the trial configuration is small . this is reflected in table [ table1 ] , which lists the percentage of rejected configurations in the constant pressure drop ensemble . here we present a detailed comparative analysis of the computational cost of the two algorithms . we do this by measuring the computational time ( for the monte carlo method and for the time stepping method , respectively ) for different system sizes . we use the conjugate gradient method to solve the kirchhoff equations . this is an iterative solver . when the network contains a given number of links ( and nodes ) , each iteration demands a number of operations proportional to that size .
the number of iterations necessary to solve the equations _ exactly _ scales with the system size , making the total cost scale accordingly . however , in practice , the number of iterations necessary to reach the solution of the kirchhoff equations to within machine precision is much lower than that needed for the theoretically exact solution . as we shall see , the measured exponent is much smaller than four . the number of time steps needed to push one pore volume through the network depends on the system size through a prefactor that essentially measures the average number of time steps it takes for an interface to cross a link . in our calculations , this is of the order of 10 . intuitively , this number should be proportional to the width of the network . in practice , as we shall see , it is slightly larger . for each time step , the conjugate gradient demands a number of operations governed by another prefactor . the total computational time per pore volume is then the product of these contributions . based on the theoretical considerations above , these exponents give an upper estimate of the scaling . the actual computational time , measured using the ` clock ( ) ` function in c , is plotted in figure [ fig4 - 5 ] . we find that it scales with the linear size with an exponent which is much smaller than this estimate . measuring the two contributions independently gives the exponents reported in the insert in figure [ fig4 - 5 ] . for the monte carlo algorithm , each sweep ideally contains individual monte carlo updates . each monte carlo update consists of time stepping a sub lattice of size $\lambda \times \lambda$ . hence , the cost of a monte carlo update follows from equation ( [ timets ] ) . however , each time stepping of a sub lattice is followed by solving the kirchhoff equations for the _ entire _ lattice in order to determine the total flow rate for the trial configuration . the cost of this operation grows with the full lattice size . the time per monte carlo sweep is then $$t_\text{mc} = 4 a b \lambda^{\alpha_\text{ts}-2} l^2 + \frac{b}{\lambda^2}\ , l^{2+\beta}\;,$$ where $a$ and $b$ are the prefactors introduced above . the factor `` 4 '' signifies that we time step the sub lattice for four pore volumes . with the measured exponents , the first term dominates the second term on the right hand side of this equation up to a crossover system size , beyond which the second term , which scales as $l^{2+\beta}$ , starts dominating . it is this behavior we see in figure [ fig4 - 5 ] : the computational time in the monte carlo method scales according to the first term , i.e. , as $l^2$ . hence , we summarize : the time stepping procedure scales as the linear size to the fourth power , whereas the monte carlo algorithm scales as the linear size to the second power , as shown in figure [ fig4 - 5 ] . a closer inspection of figures [ fig4 - 1 ] to [ fig4 - 4 ] shows that the match between the monte carlo and the time stepping procedures is good but not perfect . in this section we discuss the discrepancies between the two methods quantitatively . we show in figure [ fig4 - 6 ] the non - wetting fractional flow for a network using both time stepping and monte carlo with a range of sub network sizes ; notice that we also consider a sub - network size equal to the full system size . the calculations here are done in the constant flow rate ensemble for two values of the capillary number ca .
as we see , there is a systematic deviation between the time stepping and the monte carlo results that increases with increasing non - wetting saturation . this deviation is highlighted in figure [ fig4 - 7 ] , where the difference between the time stepping and the monte carlo results for the different sub network sizes is shown . we note that the difference between the monte carlo and the time stepping decreases with increasing capillary number ca . this is , however , to be expected , as for infinite ca any curve , monte carlo or time stepping , must fall on the diagonal of figure [ fig4 - 6 ] . in figure [ fig4 - 8 ] we show the discrepancy between the pressure drop using time stepping and monte carlo for different sub lattice sizes . the systematics seen in the fractional flow data , figures [ fig4 - 6 ] and [ fig4 - 7 ] , where the difference grows with increasing non - wetting saturation , is much less pronounced in this case . in figure [ fig4 - 9 ] , we show histograms over the non - wetting saturation of the links . that is , we measure how much non - wetting fluid each link contains . for the lower overall non - wetting saturation , there is essentially no difference between the time stepping and the monte carlo result . however , for the higher saturation , there is a difference that depends on the sub lattice size . this difference , measured as the area between the time stepping and the monte carlo histograms , is shown in figure [ fig4 - 10 ] as a function of the sub lattice size . the picture seen here resembles that seen for the non - wetting fractional flow ( figure [ fig4 - 6 ] ) : the difference grows with increasing saturation . when the non - wetting saturation is small , the non - wetting fluid will form bubbles or small clusters surrounded by the wetting fluid . as the saturation is increased , these clusters grow in size until there is a percolation - type transition where the wetting fluid starts forming clusters surrounded by the non - wetting fluid . this scenario has been studied experimentally by tallakstad et al . they argued that there is a length scale such that clusters larger than this length scale will move , whereas clusters that are smaller will be held in place by the capillary forces . the monte carlo algorithm calls for selecting a sub network which is then `` lifted '' out of the system , `` folded '' into a torus and then time stepped . the boundaries of the sub network will cut through clusters and mobilize these . this changes the cluster structure from that of the time stepping procedure . in order to investigate this , we have studied the cluster structure in the model under monte carlo and time stepping . to do this , we identify the non - wetting clusters : two nodes are considered to be part of the same cluster if the link between them has a non - wetting saturation above a threshold value , a clip threshold . here we use a fixed clip threshold . in figure [ fig4 - 11 ] , we show typical cluster structures for two different non - wetting saturations obtained with monte carlo and with time stepping . for the lower saturation , the non - wetting clusters are still quite small and there is no discernible difference between the configurations obtained with time stepping and with monte carlo . however , for the higher saturation , there is one dominating cluster in the time stepping case whereas the clusters are more broken up in the monte carlo case . we measure this qualitative difference in cluster structure by recording the cluster size distribution for the two types of updating , see figure [ fig4 - 12 ] .
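the cluster identification used for figure [ fig4 - 12 ] can be implemented with a straightforward graph search . the sketch below labels non - wetting clusters from the per - link saturations with a breadth - first search and returns the cluster size distribution ; the paper does not specify the labelling algorithm or the threshold value , so both are illustrative choices here .

```python
from collections import deque, Counter

def non_wetting_clusters(n_nodes, links, link_saturation, clip=0.5):
    """label clusters of nodes connected by links whose non-wetting saturation
    exceeds the clip threshold, and return the cluster size distribution.

    links is a list of (i, j) node pairs and link_saturation the matching list
    of per-link non-wetting saturations.
    """
    adj = {i: [] for i in range(n_nodes)}
    for (i, j), s in zip(links, link_saturation):
        if s > clip:
            adj[i].append(j)
            adj[j].append(i)

    label = [-1] * n_nodes
    sizes = []
    for start in range(n_nodes):
        if label[start] != -1 or not adj[start]:
            continue                           # already labelled or isolated node
        queue, label[start], size = deque([start]), len(sizes), 0
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if label[v] == -1:
                    label[v] = label[start]
                    queue.append(v)
        sizes.append(size)
    return label, Counter(sizes)               # n_s: number of clusters of size s
```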
when following the time stepping procedure , we run the system for pore volumes . during the last pore volumes injected ( of the total ), we measure the cluster size distribution after passing each pore volume of fluids .when using monte carlo , we run the system for 400 monte carlo updates .we record the cluster size distribution for every of the last updates .in both the time stepping and monte carlo runs , we average over samples .the number of links belong to a cluster defines the size of that cluster .the total number of clusters is and the number of clusters of size that we record is .we show in the figure . for and 0.7, there is no discernable difference in the cluster structure between the monte carlo and the time stepping procedures .however , for , there are differences . for every number of clusters during the monte carlo updating procedure is larger than for the time stepping procedure , except for the largest clusters , the percolating cluster seen in figure [ fig4 - 11 ] .this supports the supposition that the monte carlo breaks up the large non - wetting clusters .clearly , for the monte carlo algorithm to be perfected , this tendency of chopping up large non - wetting clusters needs to be counteracted .presumably , this is a problem that decreases with increasing system and sub lattice size as it is a boundary effect .we have in this work presented a new monte carlo algorithm for immiscible two - phase flow in porous media under steady - state conditions using network models .it is based on the metropolis transition probability ( [ eq : metq ] ) which in turn is build upon the configuration probability ( [ eq : piintq ] ) which we derive here . by steady - state conditions , we mean that the macroscopic parameters that describe the flow such as pressure difference , flow rate , fractional flow rate and saturation all have well defined means that stay constant . on the pore level , however , clusters flow , merge , break up , and so on . the flow may be anything but stationary .we described the algorithm in section [ subsub : implementation ] .computationally , the monte carlo algorithm is very fast compared to time stepping .we find that the time stepping procedure when implemented on a square lattice demands a computing time that scales as the linear size of the lattice , , to the fourth power , whereas the monte carlo method scales as the linear size to the second power , see section [ sub : cost ] .however , there is another term that contributes to the computing time in the monte carlo procedure which scales as .this term has a prefactor associated with it which is very small compared to the other term scaling as . for up to about 230 ,this term is small compared to the first one .there are open questions with respect to the metropolis monte carlo approach that we present here .the most important step in the direction of constructing such an approach is to identify the configuration probability ( [ eq : piintq ] ) .the second most important step is to provide a way to generate trial configurations that obey the symmetry requirement ( [ eq : sym ] ) . section [ subsub : implementation ] is concerned with this . *the monte carlo algorithm needs to be generalized to irregular networks , e.g. , those based on reconstructed porous media . 
* the necessity to solve the kirchhoff equations for the entire pore network once for every monte carlo update will slow down the algorithm when it is implemented for large systems . ideally , one should find a way to circumvent this necessity .
* the monte carlo algorithm has a tendency to break up large non - wetting clusters as described in section [ sub : limitations ] . this is a problem for large non - wetting saturations . it is most probably a boundary effect that comes from the way the sub networks are constructed . however , it needs to be overcome if the algorithm is to be useful for the entire range of saturations .
we have in this article presented a first attempt at constructing a markov chain monte carlo algorithm based on the configurational probability ( [ eq : piintq ] ) . other ways of constructing such monte carlo algorithms may well be possible that are both faster and do not pose the challenges listed above .
a. m. tartakovsky and p. meakin , _ a smoothed particle hydrodynamics model for miscible flow in three - dimensional fractures and the two - dimensional rayleigh - taylor instability , _ j. comp . phys . * 207 * , 610 ( 2005 ) .
k. t. tallakstad , h. a. knudsen , t. ramstad , g. løvoll , k. j. måløy , r. toussaint and e. g. flekkøy , _ steady - state two - phase flow in porous media : statistics and transport properties , _ phys . rev . lett . * 102 * , 074502 ( 2009 ) .
we present a markov chain monte carlo algorithm based on the metropolis algorithm for the simulation of the flow of two immiscible fluids in a porous medium under macroscopic steady - state conditions , using a dynamical pore network model that tracks the motion of the fluid interfaces . the monte carlo algorithm is based on the configuration probability , where a configuration is defined by the positions of all fluid interfaces . we show that the configuration probability is proportional to the inverse of the flow rate . for a two - dimensional network , the computational time for advancing the interfaces by time integration scales as the linear system size to the fourth power , whereas the monte carlo computational time scales as the linear size to the second power . we discuss the strengths and the weaknesses of the algorithm .
in the past two years , convolutional neural networks ( cnns ) have revolutionized computer vision .they have been applied to a variety of general vision problems , such as recognition , segmentation , stereo , flow , and even text - from - image generation , consistently outperforming past work .this is mainly due to their high generalization power achieved by learning complex , non - linear dependencies across millions of labelled examples .it has recently been shown that increasing the depth of the network increases the performance by an additional impressive margin on the imagenet challenge .it remains to be seen whether recognition can be solved by simply pushing the limits of computation ( the size of the networks ) and increasing the amount of the training data .we believe that the main challenge in the next few years will be to design computationally simpler and more efficient models that can achieve a similar or better performance compared to the very deep networks . for object detection ,a successful approach has been to generate a large pool of candidate boxes and classify them using cnns .the quality of such a detector thus largely depends on the quality of the object hypotheses .interestingly , however , using much better proposals obtained via a high - end bottom - up segmentation approach has resulted only in small improvements in accuracy . in this paper , we show how to exploit a small number of accurate object segment proposals in order to significantly improve object detection performance .we frame the detection problem as inference in a markov random field as in figure [ figure : intro ] , in which each detection hypothesis scores object appearance as well as contextual information using convolutional neural networks .each hypothesis can choose and score a segment out of a small pool of accurate object segmentation proposals .this enables our approach to place more accurate object bounding boxes in parts of the image where an object segmentation hypothesis exists or where strong contextual cues are available .we additionally show that a significant performance boost can be obtained by a sequential approach , where the network iterates between adjusting its spatial scope ( the bounding box ) and classifying its content .this strategy reduces the dependency on the initial candidate boxes obtained by and enables our approach to recover from the potentially bad initial localization .we show that our model , called segdeepm , outperforms the baseline r - cnn approach by with almost no extra computational cost .we get a total of improvement by incorporating contextual information at the cost of doubling the running time of the method . on pascal voc 2010 test, our method achieves improvement over r - cnn and over the current state - of - the - art .object detection and semantic segmentation are one of the core challenges of computer vision .object detection aims at placing a tight bounding box around each ground - truth object of a particular class , while semantic segmentation targets to assign a label to each pixel in a given image .these two tasks , although typically treated separately , are closely related : knowing where e.g. a car is should help us to carve it out in the image . 
on the other hand , knowing the car segmentation in an image should help us detect specific instances more accurately . some recent approaches have shown that using cues from one task can greatly benefit the other , whereby holistic models typically result in additional boosts . fidler et al . proposed segdpm , which leveraged state - of - the - art segmentation techniques to score detections better , boosting the mean average precision ( map ) over the original object detector on the challenging pascal voc 2010 dataset . this was a very significant improvement at a time when the state - of - the - art performance was at a saturation point . the success of segdpm has given us important insight into exploiting segmentation cues in object detection systems , but its impact fades with the rise of deep learning methods that can efficiently pre - train with millions of examples . in this paper , we build on the r - cnn framework and show how to exploit segmentation in a very simple and efficient way in order to gain a large improvement over the original approach . our model scores each bounding box candidate via the cnn features as well as our segmentation features . similarly to segdpm , our model allows each bounding box candidate to pick a segment out of a pool of segments and scores the compatibility between the box and the segment . unlike segdpm , which extracts segmentation features from the final output of semantic segmentation , our model utilizes multiple overlapping object - like segments generated by cpmc and their potentials computed by second - order pooling to incorporate per - instance segmentation information . moreover , given the segments and their corresponding potentials , our model has the same computational complexity as the original r - cnn . we coin the acronym segdeepm for our approach . our second contribution deals with one of the most common mistakes of the r - cnn approach : duplicate detections of the same object . due to how r - cnns are trained , with yes / no labels depending on whether a box overlaps the ground truth by more than 50% or not , the final network produces similar responses across several region proposals intersecting the same object . while some of these problems are dealt with via nms , our analysis shows a high drop in performance due to this issue . we propose an effective way of dealing with this problem by `` looking outside of the box '' . this simple concept gives the network the opportunity to re - adjust its scores by exploiting loose contextual information around each region proposal . meanwhile , a brief analysis of the rcnn and segrcnn models reveals that these models mainly fail with duplicate detections of the same object , as well as false negatives , especially for small objects . we address this issue by introducing context cues into our model . although the role of context in the object detection task has been widely recognized in the vision community , incorporating such information usually requires an elaborate , hand - crafted model which is hard to train . however , experiments show that the most advanced object detector , our visual system , spends similar time in recognizing low - resolution images when context information is present , which indicates that humans generally perceive context information with little extra effort .
motivated by this fact , our segrcnn model adopts a concise yet effective way to utilize context cues by `` looking out of the box '' .our method adopts another cnn model that looks around the object region and combines its response with that of segrcnn .we show that our expanded network could significantly improve detection accuracy and is particular effective for small objects .in the past years , a variety of segmentation algorithms that exploit object detections as a top - down cue have been explored .the standard approach has been to use detection features as unary potentials in an mrf , or as candidate bounding boxes for holistic mrfs . in ,segmentation within the detection boxes has been performed using a grabcut method . in ,object segmentations are found by aligning the masks obtained from poselets .there have been a few approaches to use segmentation to improve object detection . cast votes for the object s location by using a hough transform with a set of regions . uses dpm to find a rough object location and refines it according to color information and occlusion boundaries . in ,segmentation is used to mask - out the background inside the detection , resulting in improved performance .segmentation and detection has also been addressed in a joint formulation in by combining shape information obtained via dpm parts as well as color and boundary cues .our work is inspired by the success of segdpm . by augmenting the dpm detector with very simple segmentation features that can be computed in constant time ,segdpm improved the detection performance by on the challenging pascal voc dataset .the approach used segments computed from the final segmentation output of cpmc in order to place accurate boxes in parts of the image where segmentation for the object class of interest was available . this idea was subsequently exploited in by augmenting the dpm with an additional set of deformable context `` parts '' which scored contextual segmentation features around the object . in ,the segdpm detector was augmented with part visibility reasoning , achieving state - of - the - art results for detection of articulated classes . in ,the authors extended segdpm to incorporate segmentation compatibility also at the part level . in this paper , we build on r - cnn framework and transfer the core ideas of segdpm .we use appearance features from , a rich contextual appearance description around the object , and a mrf model that is able to exploit segmentation in a more efficient way than segdpm . for context , most approaches for object detection either serves as post - processing or based on semantic segmentation results around bounding box region .we argue that context information should be classified into local - context ( the regions around target object ) , spatial - context ( relationship between detected objects in the same image ) and out - of - image context ( image tags or descriptions ) . in this paper, we mainly focus on local - context and build our context features specifically for it .in this paper , we are interested in introducing semantic segmentation and context information to boost object detection .specifically , we utilize region - based bottom - up segmentation followed by class - specific regressor to score each regions , and combine them with both local and context cues . 
to achieve this goal , we design a unified model that fuses a powerful appearance model , a compact segmentation model and a concise context model .we refer to the efficient segmentation feature in segdpm and design our own variations , which could better capture the information in segments .meanwhile , we also leverage the power of cnns to encode context information in our model .the goal of our approach is to efficiently exploit segmentation and contextual cues in order to facilitate object detection . following the r - cnn setup ,we compute the selective search boxes yielding approximately 2000 object candidates per image . for each boxwe extract the last feature layer of the cnn network , that is fine - tuned on the pascal dataset as proposed in . we obtain object segment proposals via the cpmc approach ,although our approach is independent of this choice .following , we take the top proposals given by an object - independent ranker , and train class - specific classifiers for all classes of interest by the second - order pooling method o2p .we remove all segments that have less than pixels .our method will make use of these segments along with their class - specific scores .this is slightly different than segdpm which takes only 1 or 2 segments carved out from the final o2p s pixel - level labeling of the image . in the remainder of this sectionwe first define our model and describe its segmentation and contextual features .we next discuss inference and learning .finally , we detail a sequential inference scheme that iterates between correcting the input bounding boxes and scoring them with our model .we define our model as a markov random field with random variables that reason about detection boxes , object segments , and context .similar to , we define as a random variable denoting the location and scale of a candidate bounding box in the image .we also define to be a set of random variables , one for each class , i.e. .each random variable represents an index into the set of all candidate segments . here is the total number of object classes of interest and is the total number of segments in image .the random variable allows each candidate detection box to _ choose _ a segment for each class and score its confidence according to the agreement with the segment .the idea is to ( 1 ) boost the confidence of boxes that are well aligned with a high scoring object region proposal for the class of interest , and ( 2 ) adjust its score based on the proximity and confidence of region proposals for other classes , serving as context for the model .this is different from segdpm that only had a single random variable which selected a segment belonging to the detector s class .it is also different from in that the model chooses contextual segments , and does not score context in a fixed segmentation window .note that indicates that no segment is selected for class .this means that either no segment for a class of interest is in the vicinity of the detection hypothesis , or that none of the regions corresponding to the contextual class help classification of the current box .we define the energy of a configuration as follows : where , , and are the candidate s appearance , segmentation , and contextual potential functions ( features ) , respectively .we describe the potentials in detail below .* appearance : * to extract the appearance features we follow . 
the image in each candidate detection s boxis warped to a fixed size .we run the image through the cnn trained on the imagenet dataset and fine - tuned on pascal s data . as our appearance feature we use the -dimensional feature extracted from the layer . * segmentation : * similar to , our segmentation features attempt to capture the agreement between the candidate s bounding box and a particular segment .the features are complementary in nature , and , when combined within the model , aim at placing the box tightly around each segment .we emphasize that the weights for each feature will be learned , thus allowing the model to adjust the importance of each feature s contribution to the joint energy .we use slightly more complex features tailored to exploit a much larger set of segments than .in particular , we use a grid feature that aims to capture a loose geometric arrangement of the segment inside the candidate s box .we also incorporate class information , where the model is allowed to choose a different segment for each class , depending on the contextual information contained in a segment with respect to the class of the detector .we use multiple segmentation features , one for each class , thus our segmentation term decomposes : specifically , we consider the following features : * segmentgrid - in : * let denote the binary mask of the segment chosen by . for a particular candidate box , we crop the segment s mask via the bounding box of and compute the segmentgrid - in feature on a grid placed over the cropped mask .the dimension represents the percentage of segment s pixels inside the block , relative to the number of all pixels in . where is the block of pixels in grid , and indexes the segment s mask in pixel .that is , when pixel is part of the segment and otherwise .for matching the detector s class , this feature will attempt to place a box slightly bigger than the segment while at the same time trying to localize it such that the spatial distribution of pixels within each grid matches the class expected shape .for other than the detector s class , this feature will try to place the box such that it intersects as little as possible with the segments of other classes .the dimensionality of this feature is .* segment - out : * this feature follows , and computes the percentage of segment pixels outside the candidate box . unlike the segmentgrid - in, this feature computes a single value for each segment / bounding box pair . where is the bounding box corresponding to .the aim of this feature is to place boxes that are smaller compared to the segments , which , in combination with segmentgrid - in , achieves a tight fit around the segments . *backgroundgrid - in : * this feature is also computed with a grid for each bounding box .we compute the percentage of pixels in each grid cell that are * not * part of the segment : with the area of the largest segment for the image .* background - out : * this scalar feature measures the of segment s background outside of the candidate s box : * overlap : * similarly to , we use another feature to measure the alignment of the candidate s box and the segment .it is computed as the intersection - over - union ( iou ) between the box or and a tightly fit bounding box around the segment . 
where is tight box around , and a bias term which we set to in our experiments .* segmentclass : * since we are dealing with many segments per image , we add an additional feature to our model .we train the o2p rankers for each class which uses several region - aware features as input into our segmentation features .each ranker is trained to predict the iou overlap of the given segment with the ground - truth object s segment .the output of all the class - specific rankers defines the following feature : where is the score of class for segment .segmentgrid - in , segment - out , backgroundgrid - in , and background - out can be efficiently computed via integral images .note that s features are a special case of these features with a grid size .overlap and segment features can also be quickly computed using matrix operations . *context : * cnns are typically trained for the task of image classification where in most cases an input image is much larger than the object . this means that part of their success may be due to learning complex dependencies between the objects and their contextual information ( sky for aeroplane , road for car and bus ) .however , the appearance features that we use are only computed based on the candidate s box , thus hardly capturing useful information from the scene .we thus add an additional feature that looks at a bigger scope than the candidate s box .in particular , we enlarge each input candidate box by a fixed percentage along its horizontal and vertical direction . for big boxes , or those close to the image boundary , we clip the enlarged region to be fully inside the image .we keep the object labels for each expanded box the same as that for the original boxes , even if the expanded box now encloses objects of other classes .we then warp the image in each enlarged box to and fine - tune the original imagenet - trained cnn using these images and labels .we call the fine - tuned network the _ expanded cnn_. for our contextual features we extract the layer features of the expanded cnn by running the warped image in the enlarged window through the network .in the inference stage of our model , we score each candidate box as follows : observe that the first two terms in eq .[ eqn : inference ] can be computed efficiently by matrix multiplication , and the only part that depends on is its last term .although there could be exponential number of candidates for , we can greedily search each dimension of and find the best segment w.r.t .model parameters for each class .since our segmentation features do not depend on the pairwise relationships in , this greedy approach is guaranteed to find the global maximum of .finally , we sum the three terms to obtain the score of each bounding box location .[ cols="<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<,<",options="header " , ]we evaluate our method on the main object detection benchmark pascal voc .we provide a details ablative study of different potentials and choices in our model in subsec .[ sec : ablative ] . 
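Before turning to the experiments, here is a minimal numpy sketch pulling together the pieces defined above: (i) SegmentGrid-In / Segment-Out style features computed from a binary segment mask and a candidate box, and (ii) the greedy per-class choice of one segment (or none), summed with the appearance and context terms to score the box. The grid size, the normalisations, the placeholder weights and the random toy inputs are our own assumptions for illustration, not the trained segDeepM model.

```python
import numpy as np

def seg_features(mask, box, grid=3):
    """SegmentGrid-In (grid x grid values) and Segment-Out (scalar) for one box/segment pair."""
    x0, y0, x1, y1 = box
    total = mask.sum() + 1e-12                    # all pixels belonging to the segment
    inside = mask[y0:y1, x0:x1]
    ys = np.linspace(0, inside.shape[0], grid + 1).astype(int)
    xs = np.linspace(0, inside.shape[1], grid + 1).astype(int)
    gridf = np.array([[inside[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].sum() / total
                       for j in range(grid)] for i in range(grid)])
    seg_out = 1.0 - inside.sum() / total          # fraction of the segment left outside the box
    return np.append(gridf.ravel(), seg_out)

def score_box(phi_app, w_app, phi_ctx, w_ctx, masks, box, w_seg, n_classes, seg_scores):
    """E(x): appearance + context terms plus, for each class, the best segment (or none)."""
    energy = float(w_app @ phi_app) + float(w_ctx @ phi_ctx)
    for c in range(n_classes):
        pots = [w_seg @ seg_features(m, box) + s[c] for m, s in zip(masks, seg_scores)]
        energy += max(0.0, max(pots))             # h_c = 0 (no segment chosen) contributes 0 here
    return energy

# toy usage with random placeholders
rng = np.random.default_rng(0)
masks = [np.zeros((100, 100), int) for _ in range(5)]
for m in masks:
    r, c = rng.integers(0, 60, size=2)
    m[r:r + 40, c:c + 40] = 1                      # five square stand-ins for CPMC segments
seg_scores = [rng.normal(size=20) for _ in masks]  # per-class ranker scores, one row per segment
phi_app = rng.normal(size=4096); w_app = rng.normal(size=4096) * 1e-3
phi_ctx = rng.normal(size=4096); w_ctx = rng.normal(size=4096) * 1e-3
w_seg = rng.normal(size=10) * 0.1                  # weights for the 3x3 grid feature + Segment-Out
print(score_box(phi_app, w_app, phi_ctx, w_ctx, masks, (20, 20, 80, 80), w_seg, 20, seg_scores))
```

Because the segment choice for each class is independent of the others, the per-class maximisation above is exactly the greedy search that is guaranteed to find the global maximum over h.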
in subsec .[ sec : test ] we test our method on pascal s held - out test set and compare it to the current state - of - the - art methods .we first evaluate our detection performance on set of the pascal voc 2010 detection dataset .we train all methods on the subset and evaluate the detection performance using the standard pascal criterion .we provide a detailed performance analysis of each proposed potential function , which we denote with ( segmentation ) and expanded network ( the contextual network ) in table [ table : val ] .we also compare our iterative bounding box regression approach , referred to as , to the standard bounding box regression , referred to as , .r - cnn serves as our main baseline . to better justify our model, we provide an additional baseline , where we simply augment the set of selective search boxes used originally by the r - cnn with the cpmc proposal set .we call this approach rcnn+cpmc in the table ( second row ) . to contrast our model with segdpm , which originally uses segmentation features in a dpm - style formulation , we simplify our model to use their exact features . instead of hog , however , we use cnns for a fair comparison. we also use their approach to generate segments , by finding connected components in the final output of cpmc - o2p segmentation .this approach is referred to as segdpm+cnn ( third row in table [ table : val ] ) .observe that using a small set of additional segments brings a improvement for rcnn+cpmc over the r - cnn baseline .using a segdpm+cnn approach yields a more significant improvement of . with our segmentationfeatures we get an additional increase over segdpm+cnn , thus justifying our feature set .interestingly , this boost over r - cnn is achieved by our simple segmentation features which require only additional parameters .the table also shows a steady improvement of each additional added potential / step , with the highest contribution achieved by the expanded contextual network . ) . indicates no segments are used . *( right ) * box expansion ratio . disables context and indicates full image context . only contextual features used in this experiment . both plots for pascal voc 2010 .,title="fig:",scaledwidth=24.2% ] ) . indicates no segments are used . *( right ) * box expansion ratio . disables context and indicates full image context . only contextual features used in this experiment . both plots for pascal voc 2010 .,title="fig:",scaledwidth=24.2% ] our full approach , in the setting without any post - processing ,outperforms the strong baseline detector by , a significant improvement .after post - processing , the improvement is slightly lower , achieving a performance gain .we note that we improve over the baseline in 19 out of 20 object classes .the pr curves for the first 10 classes are shown in figure [ figure : pr ] and the qualitative results are shown in figure [ figure : qual ] . a detailed error analysis as proposed in of r - cnn and our detectoris shown in figure [ fig : error ] . [ [ performance - vs .- grid - size - and - of - segments . ] ] performance vs. 
grid size and # of segments .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we evaluate the influence of different grid sizes and different number of cpmc segments per image .for each cpmc segment we compute the best o2p ranking score across all classes , and choose the top segments according to these scores .figure [ ap_vs_numseg ] , left panel , shows that the highest performance gain is due to the few best scoring segments .the differences are minor across different values of and .interestingly , the model performs worse with more segments and a coarse grid , as additional low - quality segments add noise and make l - svm training more difficult .when using a finer grid , the performance peaks when more segments are use , and achieves an overall improvement over a single - cell grid .figure [ ap_vs_numseg ] , left panel , shows that the highest performance gain is due to the best few scoring segments .segdeepm performs best with .however , the differences are minor across different values of .figure [ ap_vs_numseg ] also indicates that having more segments per image does not necessarily result in higher accuracy , as additional low - quality segments add noise and make l - svm training more difficult .[ [ performance - expansion - ratio . ] ] performance expansion ratio .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + we evaluate the influence of the box expansion ratio used in our contextual model .the results for varying values of are illustrated in figure [ ap_vs_numseg ] , right panel .note that even a small expansion ratio ( in each direction ) can boost the detection performance by a significant , and the performance reaches its peak at .this indicates that richer contextual information leads to a better object recognition .notice also that the detection performance decreases beyond .this is most likely due to the fact that most contextual boxes obtained this way will cover most or even the full image , and thus the positive and negative training instances in the same image will share the identical contextual features .this confuses our classifier and results in a performance loss . if we take the full image as context , the gain is less than . [[ iterative - bounding - box - prediction . ] ] iterative bounding box prediction .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we next study the effect of iterative bounding box prediction .we report a gain over the original r - cnn by starting with our set of re - localized boxes ( one iteration ) .note that re - localization in the first iteration only affects of boxes ( only of boxes change more than from the original set , thus feature re - computation only affects half of the boxes ) .this performance gain persists when combined with our full model .if we apply another bounding box prediction as a post - processing step , this approach still obtains a improvement over r - cnn with bounding box prediction . 
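A schematic of the iterative bounding-box prediction loop just described, with the CNN feature extractor, the scoring model and the class-specific box regressor replaced by toy callables; the number of iterations and the dummy components are placeholders, not the actual trained modules.

```python
import random

def iterative_box_prediction(boxes, features, score, regress, n_iter=2):
    """Alternate between scoring candidate boxes and re-localizing them."""
    scores = []
    for _ in range(n_iter):
        feats = [features(b) for b in boxes]                     # re-extract features on current boxes
        scores = [score(f) for f in feats]                       # score with the full model
        boxes = [regress(f, b) for f, b in zip(feats, boxes)]    # predict corrected boxes
    return boxes, scores

# toy usage: features = box size, score = area, regressor = small random nudge
random.seed(0)
feat = lambda b: (b[2] - b[0], b[3] - b[1])
area = lambda f: f[0] * f[1]
nudge = lambda f, b: tuple(v + random.uniform(-1.0, 1.0) for v in b)
print(iterative_box_prediction([(0, 0, 10, 10), (5, 5, 30, 30)], feat, area, nudge)[1])
```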
in this iteration, re - localization affects of boxes .we have noticed that the performance saturates after two iterations .the second iteration improves map by only a small margin ( about ) .the interesting side result is that , the mean average best overlap ( mabo ) measure used by bottom - up proposal generation techniques to benchmark their proposals , remains exactly the same ( ) with or without our bounding box prediction , but has a significant impact on the detection performance .this may indicate that mabo is not the best or at least not the only indicator of a good bottom - up grouping technique .0.24 0.24 0.24 0.24 , using only the contextual features box expansion ratio . disables context and indicates full image context.,scaledwidth=35.0% ] [ [ missing - annotations . ] ] missing annotations .+ + + + + + + + + + + + + + + + + + + + an interesting issue arises when analyzing the top false - positives of our segdeepm .we have noticed that a non - neglible number of false - positives are due to missing annotations in pascal s ground - truth .some examples are shown in figure [ figure : missed_dets ] .these missed annotations are mostly due to small objects ( figure [ figure : missed_dets1 ] , [ figure : missed_dets3 ] ) , ambiguous definition of an `` object '' ( figure [ figure : missed_dets2 ] ) , and labelers mistakes ( figure [ figure : missed_dets4 ] ) .while missing annotations were not an issue a few years ago when performance was at , it is becoming a problem now , indicating that perhaps a re - annotation is needed .we evaluate our approach on the pascal voc 2010 subset in table [ table : test ] . for this experiment we trained our segdeepm model , as well as its potentials ( the cpmc class regressor ) on the pascal voc subset using the best parameters tuned on the / split .we only submitted one result to the evaluation server , thus no tuning on the test set was involved .table [ table : test ] shows results of our full segdeepm ( including all post - processing steps ) .we achieve a improvement over r - cnn with a 7-layer network , and a over the best reported method using a 7-layer network .notice that the best results on the current leader board are achieved by the recently released 16-layer network .this network has million parameters , compared to million parameters used in our network .our approach , with only a few additional parameters , scores rather high relative to the much larger network .our result is `` only '' lower than the very deep state - of - the - art .we also run our method using a recently released 16-layer oxfordnet .the results on / and / are shown in table [ table : val_16 ] and table [ table : test ] respectively . on the set, our segdeepm achieves mean ap and outperforms others in out of object classes . [[ performance - on - pascal - voc-2012 . 
] ] performance on pascal voc 2012 .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we also test our full segdeepm model on pascal voc 2012 .we use the parameters tuned on the pascal voc 2010 / split .the result are reported and compared to the current state - of - the - art in table [ table : val2012 ] .0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210 0.210we proposed a mrf model that scores appearance as well as context for each detection , and allows each candidate box to select a segment and score the agreement between them .we additionally proposed a sequential localization scheme , where we iterate between scoring our model and re - positioning the box ( changing the spatial scope of the input to the model ) .we demonstrated that our approach achieves a significant boost over the rcnn baseline , on pascal voc 2010 test in the 7-layer setting and in the 16-layer setting .the final result places segdeepm at the top of the current pascal s leaderboard .
in this paper , we propose an approach that exploits object segmentation in order to improve the accuracy of object detection . we frame the problem as inference in a markov random field , in which each detection hypothesis scores object appearance as well as contextual information using convolutional neural networks , and allows the hypothesis to choose and score a segment out of a large pool of accurate object segmentation proposals . this enables the detector to incorporate additional evidence when it is available and thus results in more accurate detections . our experiments show an improvement of in map over the r - cnn baseline on pascal voc 2010 , and over the current state - of - the - art , demonstrating the power of our approach .
the construction of parametrizations of low dimensional data in high dimension is an area of intense research ( e.g. , ) .a major limitation of these methods is that they are only defined on a discrete set of data . as a result ,the inverse mapping is also only defined on the data .there are well known strategies to extend the forward map to new points for example , the nystrm extension is a common approach to solve this _ out - of - sample extension _ problem ( see e.g. , and references therein ) .however , the problem of extending the inverse map ( i.e. the _ preimage problem _ ) has received little attention so far ( but see ) .the nature of the preimage problem precludes application of the nystrm extension , since it does not involve extension of eigenvectors .we present a method to numerically invert a general smooth bi - lipschitz nonlinear dimensionality reduction mapping over all points in the image of the forward map .the method relies on interpolation via radial basis functions of the coordinate functions that parametrize the manifold in high dimension .the contributions of this paper are twofold .primarily , this paper addresses a fundamental problem for the analysis of datasets : given the construction of an adaptive parametrization of the data in terms of a small number of coordinates , how does one synthesize new data using new values of the coordinates ? we provide a simple and elegant solution to solve the `` preimage problem '' .our approach is scale - free and numerically stable and can be applied to any nonlinear dimension reduction technique .the second contribution is a novel interpretation of the nystrm extension as a properly rescaled radial basis function interpolant . a precise analysis of this similarity yields a critique of the nystrm extension , as well as suggestions for improvement .we consider a finite set of datapoints that lie on a bounded low - dimensional smooth manifold , and we assume that a nonlinear mapping has been defined for each point , we further assume that the map converges toward a limiting continuous function , , when the number of samples goes to infinity .such limiting maps exist for algorithms such as the laplacian eigenmaps . in practice ,the construction of the map is usually only the first step . indeed , one is often interested in exploring the configuration space in , and one needs an inverse map to synthesize a new measurement for a new configuration in the coordinate domain ( see e.g. , ) . in other words , we would like to define an inverse map at any point .unfortunately , unlike linear methods ( such as pca ) , nonlinear dimension reduction algorithms only provide an explicit mapping for the original discrete dataset .therefore , the inverse mapping is only defined on these data .the goal of the present work is to generate a numerical extension of to all of . to simplify the problem, we assume the mapping coincides with the limiting map on the data , for .this assumption allows us to rephrase the problem as follows : we seek an extension of the map everywhere on , given the knowledge that .we address this problem using interpolation , and we construct an approximate inverse , which converges toward the true inverse as the number of samples , , goes to infinity , using terminology from geometry , we call the _ coordinate domain _ , and a _ coordinate map _ that parametrizes the manifold .the components of are the _coordinate functions_. 
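For concreteness, a minimal dense-kernel variant of such a forward map (in the spirit of Laplacian eigenmaps) that produces the discrete embedding used in what follows; the Gaussian kernel, its scale, the number of retained coordinates and the toy data are our simplifications, not a faithful reimplementation of any particular algorithm.

```python
import numpy as np

def laplacian_eigenmap(X, n_coords=2, eps=1.0):
    """Dense-kernel sketch of a forward nonlinear embedding (Laplacian-eigenmaps style)."""
    K = np.exp(-eps * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    d = K.sum(axis=1)
    Ktilde = K / np.sqrt(np.outer(d, d))             # symmetric normalised kernel
    lam, Phi = np.linalg.eigh(Ktilde)
    # drop the trivial top eigenvector and keep the next n_coords as coordinates
    return Phi[:, -2:-2 - n_coords:-1] / np.sqrt(d)[:, None]

# toy data: a noisy circle embedded in R^3
rng = np.random.default_rng(7)
t = rng.uniform(0.0, 2.0 * np.pi, 200)
X = np.c_[np.cos(t), np.sin(t), 0.1 * rng.normal(size=200)]
Y = laplacian_eigenmap(X, n_coords=2, eps=5.0)
print(Y.shape)                                       # discrete embedding, one row per data point
```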
we note that the focus of the paper is not the construction of new points in the coordinate domain , but rather the computation of the coordinate functions everywhere in .given the knowledge of the inverse at the points , we wish to interpolate over .we propose to interpolate each coordinate function , independently of each other .we are thus facing the problem of interpolating a function of several variables defined on the manifold .most interpolation techniques that are designed for single variable functions can only be extended using tensor products , and have very poor performance in several dimensions .for instance , we know from mairhuber theorem ( e.g. , ) that we should not use a basis independent of the nodes ( for example , polynomial ) to interpolate scattered data in dimension . as a result , few options exist for multivariate interpolation .some of the most successful interpolation methods involve radial basis functions ( rbfs ) .therefore , we propose to use rbfs to construct the inverse mapping .similar methods have been explored in to interpolate data on a low - dimensional manifold .we note that while kriging is another common approach for interpolating scattered data , most kriging techniques are equivalent to rbf interpolants .in fact , because in our application we lack specialized information about the covariance structure of the inverse map , kriging is identical to rbf interpolation .we focus our attention on two basis functions : the gaussian and the cubic .these functions are representative of the two main classes of radial functions : scale dependent , and scale invariant . in the experimental sectionwe compare the rbf methods to shepard s method , an approach for multivariate interpolation and approximation that is used extensively in computer graphics , and which was recently proposed in to compute a similar inverse map . for each coordinate function , we define to be the rbf interpolant to the data , the reader will notice that we dropped the dependency on ( number of samples ) in to ease readability .the function in ( [ one - d ] ) is the kernel that defines the radial basis functions , .the weights , , are determined by imposing the fact that the interpolant be exact at the nodes , and thus are given by the solution of the linear system we can combine the linear systems ( [ one - coord ] ) by concatenating all the coordinates in the right - hand side of ( [ one - coord ] ) , and the corresponding unknown weights on the left - hand side of ( [ one - coord ] ) to form the system of equations , which takes the form , where , , and .let us define the vector .the approximate inverse at a point is given by approximate inverse ( [ rbf_eqn ] ) is obtained by interpolating the original data using rbfs . in order to assess the quality of this inverse, three questions must be addressed : 1 ) given the set of interpolation nodes , , is the interpolation matrix in ( [ gauss_rbf ] ) necessarily non - singular and well - conditioned ?2 ) how well does the interpolant ( [ rbf_eqn ] ) approximate the true inverse ?3 ) what convergence rate can we expect as we populate the domain with additional nodes ? in this section we provide elements of answers to these three questions . for a detailed treatment ,see .in order to interpolate with a radial basis function , the system ( [ gauss_rbf ] ) should have a unique solution and be well - conditioned . 
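Before discussing conditioning, here is a compact sketch of the interpolant just defined: the interpolation matrix is assembled from a radial kernel of the pairwise distances between the embedded points, one weight vector is solved for per coordinate function, and the resulting map can then be evaluated anywhere in the coordinate domain. The cubic kernel, the least-squares solve (used as a guard against a nearly singular matrix) and the toy closed curve are illustrative choices on our part.

```python
import numpy as np

def fit_rbf_inverse(Y, X, kernel=lambda r: r ** 3):
    """Interpolate the inverse map so that Psi(y^(i)) = x^(i) at every training pair.

    Y : n x d low-dimensional coordinates, X : n x D original high-dimensional points.
    """
    r = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    Phi = kernel(r)                                   # interpolation matrix
    W = np.linalg.lstsq(Phi, X, rcond=None)[0]        # one weight column per coordinate function
    def psi(Ynew):
        rn = np.linalg.norm(np.atleast_2d(Ynew)[:, None, :] - Y[None, :, :], axis=-1)
        return kernel(rn) @ W
    return psi

# toy check: a closed curve in R^3 with a 2-d parametrization playing the role of the embedding
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
X = np.c_[np.cos(t), np.sin(t), 0.3 * np.cos(2.0 * t)]
Y = np.c_[np.cos(t), np.sin(t)]
psi = fit_rbf_inverse(Y, X)
print(np.abs(psi(Y) - X).max())                       # ~0: the interpolant reproduces the nodes
```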
in the case of the gaussian defined by the eigenvalues of in ( [ gauss_rbf ] )follow patterns in the powers of that increase with successive eigenvalues , which leads to rapid ill - conditioning of with increasing ( e.g. , ; see also for a discussion of the numerical rank of the gaussian kernel ) .the resulting interpolant will exhibit numerical _saturation error_. this issue is common among many scale - dependent rbf interpolants .the gaussian scale parameter , , must be selected to match the spacing of the interpolation nodes .one commonly used measure of node spacing is the _ fill distance _ , the maximum distance from an interpolation node . for the domain and a set of interpolation nodes the _ fill distance _ , , is defined by in ( [ gauss_rbf ] ) , for the gaussian ( ) and the cubic ( ) as a function of the fill distance for a fixed scale .points are randomly scattered on the first quadrant of the unit sphere in , for from left to right .note : the same range of , from 10 to 1000 , was used in each dimension . in high dimension, it takes a large number of points to reduce fill distance .however , the condition number of still grows rapidly for increasing ., scaledwidth=80.0% ] in ( [ gauss_rbf ] ) , for the gaussian ( ) and the cubic ( ) as a function of the scale , for a fixed fill distance . points are randomly scattered on the first quadrant of the unit sphere in , from left to right .[ cond_w_ep],scaledwidth=80.0% ] owing to the difficulty in precisely establishing the boundary of a domain defined by a discrete set of sampled data , estimating the fill distance is somewhat difficult in practice .additionally , the fill distance is a measure of the `` worst case '' , and may not be representative of the `` typical '' spacing between nodes .thus , we consider a proxy for fill distance which depends only on mutual distances between the data points .we define the _ local fill distance _, , to denote the average distance to a nearest neighbor , the relationship between the condition number of and the spacing of interpolation nodes is explored in fig .[ cond_w_fill ] , where we observe rapid ill - conditioning of with respect to decreasing local fill distance , .conversely , if remains constant while is reduced , the resulting interpolant improves until ill - conditioning of the matrix leads to propagation of numerical errors , as is shown in fig .[ cond_w_ep ] .when interpolating with the gaussian kernel , the choice of the scale parameter is difficult . on the one hand , smaller values of lead to a better interpolant .for example , in 1- , a gaussian rbf interpolant will converge to the lagrange interpolating polynomial in the limit as .on the other hand , the interpolation matrix becomes rapidly ill - conditioned for decreasing . while some stable algorithms have been recently proposed to generate rbf interpolants ( e.g. , , and references therein ) these sophisticated algorithms are more computationally intensive and algorithmically complex than the rbf - direct method used in this paper , making them undesirable for the inverse - mapping interpolation task .saturation error can be avoided by using the scale - free rbf kernel , one instance from the set of rbf kernels known as the _ radial powers _, together with the _ thin plate splines _ , they form the family of rbfs known as the _ polyharmonic splines_. 
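The quantities in this discussion are easy to reproduce numerically. The sketch below computes the local fill distance and compares the conditioning of the Gaussian and cubic interpolation matrices for points scattered on the first quadrant of the unit sphere; the scale, the sample sizes and the random sampling are our own choices, so the printed numbers only mirror the trend of the figures, not their exact values.

```python
import numpy as np

def local_fill_distance(Y):
    """Average nearest-neighbour distance among the interpolation nodes."""
    D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    return D.min(axis=1).mean()

def condition_numbers(Y, eps=1.0):
    D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    gauss = np.exp(-(eps * D) ** 2)
    cubic = D ** 3
    return np.linalg.cond(gauss), np.linalg.cond(cubic)

rng = np.random.default_rng(1)
for n in (50, 200, 800):
    # points scattered on the first quadrant of the unit sphere in R^3
    P = np.abs(rng.normal(size=(n, 3)))
    P /= np.linalg.norm(P, axis=1, keepdims=True)
    cg, cc = condition_numbers(P)
    print(f"n={n:4d}  h_loc={local_fill_distance(P):.3f}  "
          f"cond(gauss)={cg:.2e}  cond(cubic)={cc:.2e}")
```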
because it is a monotonically increasing function , the cubic kernel , , may appear less intuitive than the gaussian .the importance of the cubic kernel stems from the fact that the space generated by linear combinations of shifted copies of the kernel is composed of splines . in one dimension ,one recovers the cubic spline interpolant .one should note that the behavior of the interpolant in the far field ( away from the boundaries of the convex hull of the samples ) can be made linear ( by adding constants and linear polynomials ) as a function of the distance , and therefore diverges much more slowly than . in order to prove the existence and uniqueness of an interpolant of the form , require that the set be a -unisolvent set in , where _ -unisolvency _ is as follows .the set of nodes is called _-unisolvent _ if the unique polynomial of total degree at most interpolating zero data on is the zero polynomial . for our problem, the condition that the set of nodes be 1-unisolvent is equivalent to the condition that the matrix have rank ( we assume that ) .this condition is easily satisfied .indeed , the rows of ( [ polymatrix ] ) are formed by the orthogonal eigenvectors of . additionally , the first eigenvector , , has constant sign . as a result , are linearly independent of any other vector of constant sign , in particular . in figures[ cond_w_fill ] and [ cond_w_ep ] we see that the cubic rbf system exhibits much better conditioning than the gaussian .we now consider the second question : can the interpolant ( [ one - d ] ) approximate the true inverse to arbitrary precision ?as we might expect , an rbf interpolant will converge to functions contained in the completion of the space of linear combinations of the kernel , .this space is called the _ native space_.we note that the completion is defined with respect to the -norm , which is induced by the inner - product given by the reproducing kernel on the pre - hilbert space .it turns out that the native space for the gaussian rbf is a very small space of functions whose fourier transforms decay faster than a gaussian . in practice ,numerical issues usually prevent convergence of gaussian rbf interpolants , even within the native space , and therefore we are not concerned with this issue .the native space of the cubic rbf , on the other hand , is an extremely large space .when the dimension , , is odd , the native space of the cubic rbf is the beppo levi space on of order .we recall the definition of a _beppo levi space _ of order . for , the linear space , equipped with the inner product , is called the _beppo levi space on of order _ , where denotes the weak derivative of ( multi - index ) order on . for even dimension ,the beppo levi space on of order corresponds to the native space of the thin plate spline .because we assume that the inverse map is smooth , we expect that it belongs to any of the beppo levi spaces . 
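A sketch of the cubic interpolant augmented with the constant-plus-linear polynomial tail discussed above: the usual polyharmonic-spline construction solves the bordered system with side conditions on the weights, which is uniquely solvable on a 1-unisolvent node set. The synthetic smooth "inverse map" used for the check is our own test function.

```python
import numpy as np

def fit_cubic_rbf_poly(Y, X):
    """Cubic RBF interpolant with an appended constant + linear polynomial."""
    n, d = Y.shape
    K = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1) ** 3
    P = np.hstack([np.ones((n, 1)), Y])
    A = np.block([[K, P], [P.T, np.zeros((d + 1, d + 1))]])      # bordered system
    rhs = np.vstack([X, np.zeros((d + 1, X.shape[1]))])          # side conditions P^T w = 0
    sol = np.linalg.solve(A, rhs)
    W, C = sol[:n], sol[n:]
    def psi(Ynew):
        Ynew = np.atleast_2d(Ynew)
        Kn = np.linalg.norm(Ynew[:, None, :] - Y[None, :, :], axis=-1) ** 3
        Pn = np.hstack([np.ones((len(Ynew), 1)), Ynew])
        return Kn @ W + Pn @ C
    return psi

rng = np.random.default_rng(2)
Y = rng.uniform(-1.0, 1.0, size=(60, 2))
X = np.c_[np.sin(np.pi * Y[:, 0]), Y.prod(axis=1), np.cos(Y[:, 1])]   # smooth synthetic inverse map
psi = fit_cubic_rbf_poly(Y, X)
print(np.abs(psi(Y) - X).max())        # machine precision at the nodes; far field grows linearly
```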
despite the fact that we lack a theoretical characterization of the native space for the cubic rbf in even dimension ,all of our numerical experiments have demonstrated equal or better performance of the cubic rbf relative to the thin plate spline in all dimensions ( see also for similar conclusions ) .thus , to promote algorithmic simplicity for practical applications , we have chosen to work solely with the cubic rbf .the gaussian rbf interpolant converges ( in norm ) exponentially fast toward functions in the native space , as a function of the decreasing fill distance .however , as observed above , rapid ill - conditioning of the interpolation matrix makes such theoretical results irrelevant without resorting to more costly stable algorithms .the cubic interpolant converges at least as fast as in the respective native space .in practice , we have experienced faster rates of algebraic convergence , as shown in the experimental section .we first conduct experiments on a synthetic manifold , and we then provide evidence of the performance of our approach on real data . for all experimentswe quantify the performance of the interpolation using a `` leave - one - out reconstruction '' approach : we compute , for , using the remaining points : , and their coordinates in , .the average performance is then measured using the average leave - one - out reconstruction error , in order to quantify the effect of the sampling density on the reconstruction error , we compute as a function of , which is defined by ( [ localfill ] ) .the two rbf interpolants are compared to shepard s method , a multivariate interpolation / approximation method used extensively in computer graphics .shepard s method computes the optimal constant function that minimizes the sum of squared errors within a neighborhood of in , weighted according to their proximity to .the solution to this moving least squares approximation is given by the relative impact of neighboring function values is controlled by the scale parameter , which we choose to be a multiple of . for our synthetic manifold example , we sampled points from the uniform distribution on the unit sphere , then embedded these data in via a random unitary transformation .the data are mapped to using the first five non - trivial eigenvectors of the graph laplacian .the minimum of the total number of available neighbors , , and 200 neighbors was used to compute the interpolant . for each local fill distance , , the average reconstruction error is computed using ( [ error ] ) .the performances of the cubic rbf , gaussian rbf , and shepard s method versus are shown in fig .[ unitsphere ] .we note that the interpolation error based on the cubic rbf is lowest , and appears to scale approximately with , an improvement over the bound .in fact , the cubic rbf proves to be extremely accurate , even with a very sparsely populated domain : the largest corresponds to 10 points scattered on . , on embedded in , using the cubic ( left ) , the gaussian ( center ) , and shepard s method ( right ) .note the difference in the range of -axis.,scaledwidth=80.0% ] .reconstruction error for each digit ( 0 - 9 ) .red denotes lowest average reconstruction residual . 
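As a small counterpart to the leave-one-out protocol used in these experiments, the sketch below compares a plain cubic RBF fit against a Shepard-type weighted average on a synthetic smooth map; the Gaussian weighting in the Shepard variant, its scale and the test function are our assumptions, so only the qualitative comparison is meaningful.

```python
import numpy as np

def cubic_rbf(Yt, Xt, Yn):
    """Compact cubic RBF fit (same construction as the earlier sketch, without the polynomial)."""
    K = np.linalg.norm(Yt[:, None, :] - Yt[None, :, :], axis=-1) ** 3
    W = np.linalg.lstsq(K, Xt, rcond=None)[0]
    Kn = np.linalg.norm(np.atleast_2d(Yn)[:, None, :] - Yt[None, :, :], axis=-1) ** 3
    return Kn @ W

def shepard(Yt, Xt, Yn, scale=0.3):
    """Shepard / moving-least-squares constant fit with Gaussian proximity weights."""
    d2 = ((np.atleast_2d(Yn)[:, None, :] - Yt[None, :, :]) ** 2).sum(-1)
    Wgt = np.exp(-d2 / scale ** 2)
    return (Wgt @ Xt) / Wgt.sum(axis=1, keepdims=True)

def loo_error(Y, X, fit):
    """Average leave-one-out reconstruction error over all points."""
    errs = []
    for i in range(len(Y)):
        keep = np.arange(len(Y)) != i
        errs.append(np.linalg.norm(fit(Y[keep], X[keep], Y[i:i + 1]) - X[i]))
    return float(np.mean(errs))

rng = np.random.default_rng(3)
Y = rng.uniform(-1.0, 1.0, size=(80, 2))                               # low-dimensional coordinates
X = np.c_[np.sin(np.pi * Y[:, 0]), Y.prod(axis=1), np.cos(Y[:, 1])]    # smooth synthetic inverse map
print("cubic RBF :", loo_error(Y, X, cubic_rbf))
print("shepard   :", loo_error(Y, X, shepard))
```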
[ cols="^,^,^,^,^,^,^,^,^,^,^,^",options="header " , ] three representative reconstructions for the digit `` 3 '' .the optimal scales ( according to table [ digits_table ] ) were chosen for both the gaussian rbf and shepard s methods .the cubic rbf outperforms the gaussian rbf and shepard s method in all cases , with the lowest average error ( table [ digits_table ] ) , and with the most `` noise - like '' reconstruction residual ( fig .[ three_example ] ) .results suggest that a poor choice of scale parameter with the gaussian can corrupt the reconstruction .the scale parameter in shepard s method must be carefully selected to avoid the two extremes of either reconstructing solely from a single nearest neighbor , or reconstructing a blurry , equally weighted , average of all neighbors .finally , the performance of the inverse mapping algorithms was also assessed on the frey face dataset , which consists of digital images of brendan frey s face taken from sequential frames of a short video .the dataset is composed of gray scale images .each image was normalized to have unit norm , providing a dataset of 1,965 points in .a 15-dimensional representation of the frey face dataset was generated via laplacian eigenmaps .the inverse mapping techniques were tested on all images in the set .table [ frey_table ] shows the mean leave - one - out reconstruction errors for the three methods .[ frey_example ] shows three representative reconstructions using the different techniques .the optimal scales ( according to table [ frey_table ] ) were chosen for both the gaussian rbf and shepard s methods .again , the cubic rbf outperforms the gaussian rbf and shepard s method in all cases , with the lowest average error ( table [ frey_table ] ) , and with the most `` noise - like '' reconstruction residual ( fig .[ frey_example ] ) .inspired by the rbf interpolation method , we provide in the following a novel interpretation of the nystrm extension : the nystrm extension interpolates the eigenvectors of the ( symmetric ) normalized laplacian matrix using a slightly modified rbf interpolation scheme .while several authors have mentioned the apparent similarity of nystrm method to rbf interpolation , the novel and detailed analysis provided below provides a completely new insight into the limitations and potential pitfalls of the nystrm extension .consistent with laplacian eigenmaps , we consider the symmetric normalized kernel , where is a radial function measuring the similarity between and , and is the degree matrix ( diagonal matrix consisting of the row sums of ). given an eigenvector of ( associated with a nontrivial eigenvalue ) defined over the points , the nystrm extension of to an arbitrary new point is given by the interpolant where is the coordinate of the eigenvector .we now proceed by re - writing in ( [ nystrom1 ] ) , using the notation , where , and . 
^t = \tilde { { \bm k } } ( { { \bm x } } , \cdot)^t { \bm{\phi}}\lambda^{-1 } { \bm{\phi}}^t { \bm{\phi}}_l \\ & = \tilde { { \bm k } } ( { { \bm x } } , \cdot)^t \widetilde k^{-1 } { \bm{\phi}}_l = \frac{1}{\sqrt{d ( { { \bm x } } ) } } \begin{bmatrix } k ( { { \bm x } } , { { \bm x } } ^{(1 ) } ) & \ldots & k ( { { \bm x } } , { { \bm x } } ^{(n ) } ) \end{bmatrix } d^{-1/2 } ( d^{1/2 } k^{-1 } d^{1/2 } ) { \bm{\phi}}_l\\ & = \frac{1}{\sqrt{d ( { { \bm x } } ) } } { { \bm k } } ( { { \bm x } } , \cdot)^t k^{-1 } ( d^{1/2 } { \bm{\phi}}_l ) .\end{split } \label{nystrom2}\ ] ] if we compare the last line of ( [ nystrom2 ] ) to ( [ rbf_eqn ] ) , we conclude that in the case of laplacian eigenmaps , with a nonsingular kernel similarity matrix , the nystrm extension is computed using a radial basis function interpolation of after a pre - rescaling of by , and post - rescaling by .although the entire procedure it is not exactly an rbf interpolant , it is very similar and this interpretation provides new insight into some potential pitfalls of the nystrm method .the first important observation concerns the sensitivity of the interpolation to the scale parameter in the kernel . as we have explained in section [ conditioning ] , the choice of the optimal scale parameter for the gaussian rbf is quite difficult . in fact , this issue has recently received a lot of attention ( e.g. ) .the second observation involves the dangers of sparsifying the similarity matrix . in many nonlinear dimensionality reduction applications , it is typical to sparsify the kernel matrix by either thresholding the matrix , or keeping only the entries associated with the nearest neighbors of each .if the nystrm extension is applied to a thresholded gaussian kernel matrix , then the components of as well as are discontinuous functions of . as a result, , the nystrm extension of the eigenvector will also be a discontinuous function of , as demonstrated in fig .[ truncated ] . in the nearest neighbor approach ,the extension of the kernel function to a new point is highly unstable and poorly defined . given this larger issue , the nystrm extension should not be used in this case . in order to interpolate eigenvectors of a sparse similarity matrix , a better interpolation scheme such as a true ( non - truncated ) gaussian rbf , or a cubic rbf interpolant could provide a better alternative to nystrm. a local implementation of the interpolation algorithm may provide significant computational savings in certain scenarios .the authors would like to thank the three anonymous reviewers for their excellent comments .ndm was supported by nsf grant dms 0941476 ; bf was supported by nsf grant dms 0914647 ; fgm was partially supported by nsf grant dms 0941476 , and doe award de - scoo04096 .
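The algebraic identity derived above is easy to verify numerically: for a dense Gaussian kernel, extending an eigenvector of the normalised kernel by the usual Nyström formula gives the same value as interpolating the rescaled vector with the un-normalised kernel and post-rescaling by the square root of the new point's degree. The toy data, kernel scale and choice of eigenpair below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 3))                          # training points
kernel = lambda A, B: np.exp(-2.0 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

K = kernel(X, X)
d = K.sum(axis=1)                                     # degrees
Ktilde = K / np.sqrt(np.outer(d, d))                  # symmetric normalised kernel
lam, Phi = np.linalg.eigh(Ktilde)
phi, lam_l = Phi[:, -2], lam[-2]                      # a non-trivial eigenpair

x_new = rng.normal(size=(1, 3))
kx = kernel(x_new, X)[0]
d_new = kx.sum()

nystrom = (kx / np.sqrt(d_new * d)) @ phi / lam_l                        # standard Nystrom extension
rbf_view = (kx @ np.linalg.solve(K, np.sqrt(d) * phi)) / np.sqrt(d_new)  # rescaled RBF interpolant
print(nystrom, rbf_view)                              # the two values coincide
```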
nonlinear dimensionality reduction embeddings computed from datasets do not provide a mechanism to compute the inverse map . in this paper , we address the problem of computing a stable inverse map to such a general bi - lipschitz map . our approach relies on radial basis functions ( rbfs ) to interpolate the inverse map everywhere on the low - dimensional image of the forward map . we demonstrate that the scale - free cubic rbf kernel performs better than the gaussian kernel : it does not suffer from ill - conditioning , and does not require the choice of a scale . the proposed construction is shown to be similar to the nystrm extension of the eigenvectors of the symmetric normalized graph laplacian matrix . based on this observation , we provide a new interpretation of the nystrm extension with suggestions for improvement . inverse map , nonlinear dimensionality reduction , radial basis function , interpolation , nystrm extension
the first result of the calculus of variations ever discovered must have been the statement that the shortest path joining two points is a straight line segment .another classical variational problem consists in finding , amongst all simple closed plane curves of a given fixed length , one that encloses the largest possible area .it is well known since ancient times that the circle is the shape that encloses maximum area for a given length of perimeter .however , it was not until the eighteenth century that a systematic theory , the calculus of variations ( cov ) , began to emerge .a modern face to the cov is given by the theory of optimal control .economics is a source of interesting applications of the theory of calculus of variations and optimal control .classical examples include the optimal capital spending problem , optimal reservoir control , optimal production subject to royalty payment obligations , optimal maintenance and replacement policy , and optimal drug bust strategy .the following economics problem ( explained briefly here ) has motivated this paper .a standard feature of the theory of the firm is that a profit maximising firm facing a downward sloping demand curve reacts to an increase in marginal cost by reducing output and increasing price . in this context, it is well understood that a requirement to pay a flat - rate royalty on sales has just this effect of increasing marginal cost and thereby decreasing output while simultaneously increasing price .however , the effect of permitting the royalty to take on more general forms leads naturally to non - standard cov problems , and explains why this question has remained unaddressed to date .recently the effect of piecewise linear cumulative royalty schedules on the optimal intertemporal production policy , , an optimal economics control problem that does not fit into the classical class of variational problems , has been formulated .the economics problem lies in the area of repayable launch investment ( rli ) . for the purposes of this paper we will outline just the mathematical nature of the problem since the precise ( nonlinear ) economic details are of secondary importance here . consider the system in the time domain modelled by the differential equation with the endpoint state value at time unknown .we wish to determine the control function for ] has an extremum .an initial condition is imposed on , but is free .suppose that ] .moreover , from an optimal control perspective one has where is the hamiltonian multiplier .theorem [ thm : mr ] asserts that the usual necessary optimality conditions ( the euler - lagrange equation or the pontryagin maximum principle ) hold for problem ( [ eq : p ] ) by substituting the classical transversality condition with consider an example that illustrates the new class of cov problem .it has the same form as the complicated nonlinear optimal intertemporal production policy problem . consider the ode system described by we wish to maximise = \int_0^t f(t , y(t),u(t),z)\ , dt\ ] ] where is a continuous function .the initial known state is and final state value is free . in this examplewe set .the hamiltonian is and function does not depend on , and for an optimum ( maximum in this example ) , the costate satisfies the stationarity condition is and this yields from ( [ az ] ) holds , , us consider the necessary conditions ( nc ) that need to be satisfied . 
for the system of odes ( [ a1 ] ) and ( [ a2 ] ) with control ( [ a3 ] ) ,the known zero initial condition and a guessed initial value , we need to ensure that the natural boundary condition ( [ a4 ] ) is satisfied .we need to solve the two point boundary value problem .also we need to iterate the value of used in ( [ a3 ] ) to ensure that in fact the value equals the value obtained for at .when one has obtained convergence regarding the values used in ( [ a3 ] ) and ( [ a4 ] ) , then nc is satisfied and we should have the optimal solution . use the newton shooting method with two guessed values and .we desire , and as specified by equation ( [ a4 ] ) .when the program obtains results with these two equations holding to a very high degree of accuracy , the necessary conditions nc hold and we should have the optimal solution .we have solved the shooting method problem using and the highly accurate numerical recipes library routines : \(i ) we integrate the system , , the system of odes ( [ a1 ] ) and ( [ a2 ] ) , and the results are , , , and perturbations of the optimal control by increasing and decreasing the value of at a single time instant yield smaller values .see figure 1 for results on state variable and control variable . a completely different approach , using a nonlinear programming technique , was also used .this technique may be useful for the actual piecewise constant economics problem .we solved this problem using euler and runge - kutta discretisation , and an optimisation algorithm to solve for the unknown control variables at each time instant .we computed the nonlinear programming problem using ampl with the minos solver and neos . using 40 time steps yields a good approximation very similar to the optimal resultsobtained using the precise approach here described .consider the ode system ( [ a1 ] ) and the associated optimal control problem described by ( [ eq : functional ] ) and ( [ eq : lagrangean ] ) .we set this as a minimization problem . from ( [ eq : functional ] ) define = \int_0^t g(t , y(t),u(t),z)\ , dt\ ] ] where with the final state value free , and .we now use the euler - lagrange equation ( [ eqn1 ] ) to find candidate solutions : for all $ ] . set and so since , using ( [ eqn1 ] )we can find by solving next result was obtained using : we find easily from using integration , , in or : where some comments : * the function is not defined for .however one can define for as * theorem [ thm : mr ] assumes . *from ( [ eq : f ] ) .if we have and we see that this is not the best solution .so and .* we must verify ( see in ( [ eq : yt ] ) ) so two cases are to be investigated : and . 
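Since the economics integrand itself is not reproduced above, the sketch below applies the same Newton-type shooting idea to a small illustrative free-endpoint problem that we made up for this purpose: maximise J = \int_0^1 [-(u-1)^2/2 + z y - z^2] dt with y' = u, y(0) = 0 and z := y(1) unknown. For this toy problem, stationarity gives u = 1 + p, the costate obeys p' = -z, and the modified natural boundary condition works out to p(1) = \int_0^1 (y - 2z) dt; the two shooting unknowns are p(0) and z, exactly as in the scheme described above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# illustrative free-endpoint problem (our own toy, not the economics model):
#   maximise J = \int_0^1 [ -(u-1)^2/2 + z*y - z^2 ] dt ,  y' = u ,  y(0) = 0 ,  z := y(1) free
# with u = 1 + p, p' = -z and the modified transversality condition p(1) = \int_0^1 (y - 2z) dt.

def residuals(unknowns):
    p0, z = unknowns
    def rhs(t, s):                       # s = (y, p, I), I accumulates \int (y - 2z) dt
        y, p, I = s
        return [1.0 + p, -z, y - 2.0 * z]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, p0, 0.0], rtol=1e-10, atol=1e-12)
    yT, pT, IT = sol.y[:, -1]
    return [yT - z, pT - IT]             # consistency of z and the new transversality condition

p0, z = fsolve(residuals, x0=[0.0, 0.0])
print(f"z = {z:.6f} (exact 12/23 = {12/23:.6f}),  p(0) = {p0:.6f} (exact -5/23 = {-5/23:.6f})")
```

Solving this linear toy problem by hand gives z = 12/23 and p(0) = -5/23, which the iteration recovers; the same structure (integrate the state and costate, evaluate the two residuals, update the two guesses) mirrors the shooting scheme described above.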
recall ( [ eqn2 ] ) : we have and using we arrive to integrating for the branch we obtain the left - hand side of ( [ eq : n7 ] ) is solving equation ( [ eq : n7 ] ) numerically we get and .the objective value is obtained using numerical integration over ( [ eq : j ] ) .this compares favorably with the result ( [ azj ] ) .note that similar calculations for the branch provide a worse solution with ( and ) .the results obtained here by symbolic algebra computations ( sac ) are in accordance and validate the numerical shooting solution obtained in section [ sec : na ] .[ fig_solution ] to problem ( [ a1])-([eq : functional ] ) obtained from both symbolic computation and the shooting method.,title="fig:",width=226 ] to problem ( [ a1])-([eq : functional ] ) obtained from both symbolic computation and the shooting method.,title="fig:",width=226 ]in this note we have shown how the standard necessary optimality conditions and numerical procedures for problems of the calculus of variations and optimal control should be adapted in order to cover lagrangians depending on the free end - point .the numerical techniques were validated with a simple sample example that allows symbolic calculations using a modern computer algebra system . in the actual optimal intertemporal production policy economics problemthe lagrangian may be piecewise continuous and this requires amended numerical techniques , such as nonlinear programming , for its solution .this numerical solution approach will be important for solution of the actual nonlinear economics problem .we thank the reviewers for their helpful comments .the first two authors were supported by the centre for research on optimization and control ( ceoc ) from the portuguese foundation for science and technology ( fct ) , cofinanced by the european community fund feder / poci 2010 .pontryagin , l. s. ; boltyanskii , v. g. ; gamkrelidze , r. v. ; mishchenko , e. f. _ selected works .4 . the mathematical theory of optimal processes _ , translated from the russian by k. n. trirogoff , translation edited by l. w. neustadt , reprint of the 1962 english translation , gordon & breach , new york , 1986 .
we study a new non - classical class of variational problems that is motivated by some recent research on the non - linear revenue problem in the field of economics . this class of problem can be set up as a maximising problem in the calculus of variations ( cov ) or optimal control . however , the state value at the final fixed time , , is _ a priori _ unknown and the integrand is a function of the unknown . this is a non - standard cov problem . in this paper we apply the new costate boundary conditions in the formulation of the cov problem . we solve some sample examples in this problem class using the numerical shooting method to solve the resulting tpbvp , and incorporate the free as an additional unknown . essentially the same results are obtained using symbolic algebra software . pedro a. f. cruz is assistant professor in the department of mathematics at the university of aveiro . he obtained a computational science msc from instituto superior tcnico and a phd in mathematics from the university of aveiro . he has published journal and conference works in statistics , optimization , and control theory . his main research interests are in the field of computational optimization . delfim f. m. torres is associate professor of mathematics at the university of aveiro ; scientific coordinator of the control theory group ( cotg ) ; editor - in - chief of ijms and ijamas . he received the _ licenciatura _ degree from the university of coimbra , and the msc and phd degrees from the university of aveiro . delfim f. m. torres has written more than 150 scientific and pedagogical publications , and held positions of invited and visiting professor in several countries in europe , africa , caucasus , and america . his research interests include several topics in the areas of the calculus of variations and optimal control . alan zinober is professor of nonlinear control theory in the department of applied mathematics at the university of sheffield . after obtaining the bsc and msc degrees at the university of cape town he was awarded the phd degree from the university of cambridge . he has been the recipient of a number of engineering and physical sciences research council and other research grants . he has published many journal and conference publications , and has edited three research monographs . the central theme of his research is in the field of sliding mode control and other areas of nonlinear control theory .
consider a maladapted population such as a bacterial colony in a glucose - limited environment , or a viral population in a vaccinated animal cell . in such harsh environments ,the less fit members of the population are likely to perish and only the highly fit ones can survive to the next generation . in this manner , the fitness of the population increases with time and the initially maladapted population evolves to a well - adapted state . in the last century, there has been a concerted effort to put this verbal theory of darwin on a solid quantitative footing by performing long - term experiments on microbial populations and studying theoretical models of biological evolution .one of the questions in evolutionary biology concerns the mode of evolution . in the experiments on microbes , it is found that the fitness of the maladapted population can increase with time in either a smooth continuous manner or sudden jumps .the latter mode is consistent with evolution on a fitness landscape defined on genotypic space with many local peaks separated by fitness valleys .on such a rugged fitness landscape , a low fitness population initially climbs a fitness peak until it encounters a local peak where it gets trapped since a better peak lies some mutational distance away . in a population of realistic size , it takes a finite time for an adaptive mutation to arise and the fitness stays constant during this time ( stasis ) .once some beneficial mutants become available , the fitness increases quickly as the population moves to a higher peak where it can again get stuck .such dynamics alternating between stasis and rapid changes in fitness go on until the population reaches the global maximum .this punctuated behavior of fitness is also seen in deterministic models that assume infinite population size .an example of such a step - like pattern for average fitness is shown in fig .[ avgq ] . a neat and unambiguous way of defining a step is by considering the fitness of the most populated genotype also shown in fig . [ avgq ] .since large but finite populations evolve deterministically at short times , it is worthwhile to study the punctuated evolution in models with infinite number of individuals . in this article, we will briefly describe some exact results concerning the dynamics of an infinitely large population on rugged fitness landscapes .we will find that the mechanism producing the step - like behavior is not due to `` valley crossing '' as in finite populations but when a fitter population `` overtakes '' the less fit one as described in the subsequent sections . 
we consider an infinitely large population reproducing asexually via the elementary processes of selection and mutation . each individual in the population carries a binary string $\sigma = (\sigma_1,\ldots,\sigma_L)$ of length $L$ where $\sigma_i = 0$ or $1$ . the sequences are arranged on the multi - dimensional hamming space . the information about the environment is encoded in the fitness landscape defined as a map from the sequence space into the real numbers and is generated by associating a non - negative real number $w(\sigma)$ to each sequence . fitness landscapes can be simple , possessing some symmetry properties such as permutation invariance , or complex , devoid of any such symmetries . fitness functions with a single peak are an example of simple fitness landscapes while rugged landscapes with many hills and valleys belong to the latter class . the average population fraction $x(\sigma,t)$ with sequence $\sigma$ at time $t$ follows mutation - selection dynamics described by the following discrete time equation
$$x(\sigma,t+1)= \frac{\sum_{\sigma'} p_{\sigma' \to \sigma}\, w(\sigma')\, x(\sigma',t)}{\sum_{\sigma'} w(\sigma')\, x(\sigma',t)} \ . \qquad [ quasi ]$$
the last two factors in the numerator of the above equation give the population fraction $x(\sigma',t)$ when a sequence $\sigma'$ copies itself with replication probability $w(\sigma')$ , since fitness is defined as the average number of offspring produced per generation . after the reproduction process , point mutations are introduced independently at each locus of the sequence with probability $\mu$ per generation . thus , a sequence $\sigma$ is obtained via mutations in $\sigma'$ with probability
$$p_{\sigma' \to \sigma} = \mu^{d(\sigma,\sigma')}\, (1-\mu)^{L-d(\sigma,\sigma')} \qquad [ mut ]$$
where the hamming distance $d(\sigma,\sigma')$ is the number of point mutations in which the sequences $\sigma$ and $\sigma'$ differ . the denominator of ( [ quasi ] ) is the average fitness of the population at time $t$ which ensures that the density is conserved . the stationary state of the quasispecies equation ( [ quasi ] ) has been studied extensively in the last two decades for various fitness landscapes . these numerical and analytical studies have shown that for most landscapes , there exists a critical mutation rate below which the population forms a quasispecies consisting of the fittest genotype and its closely related mutants , while above it the population delocalises over the whole sequence space . this _ error threshold _ phenomenon can be easily demonstrated for a single peak fitness landscape defined as
$$w(\sigma)=w_0\, \delta_{\sigma,\sigma_0}+(1-\delta_{\sigma,\sigma_0}) \ , \quad w_0 > 1$$
where $\sigma_0$ is the fittest sequence . in the limit $L \to \infty$ keeping $\mu L$ fixed , the frequency of the fittest sequence in the steady state of ( [ quasi ] ) is given by
$$x(\sigma_0)= \frac{w_0\, e^{-\mu L}-1}{w_0 - 1}$$
which is an acceptable solution provided $\mu L < \ln w_0$ . for $\mu L > \ln w_0$ , selection is unable to counter the delocalising effects of mutation and the population can not be maintained at the fitness peak . for a discussion of the error threshold phenomenon on other fitness landscapes and generalisations of the basic quasispecies equation ( [ quasi ] ) , we refer the reader to . we now turn our attention to the dynamical evolution of $x(\sigma,t)$ on rugged fitness landscapes . we consider maximally rugged fitness landscapes for which the fitness $w(\sigma)$ is a random variable chosen independently from a common distribution . it is useful to introduce the unnormalised population defined as
$$z(\sigma,t)=x(\sigma,t) \prod_{t'=0}^{t-1} \sum_{\sigma'} w(\sigma')\, x(\sigma',t')$$
in terms of which the nonlinear evolution ( [ quasi ] ) reduces to the following linear iteration
$$z(\sigma,t+1)= \sum_{\sigma'} p_{\sigma' \to \sigma}\, w(\sigma')\, z(\sigma',t) \ . \qquad [ linear ]$$
since at the beginning of the adaptation process the population finds itself at a low fitness genotype , we start with the initial condition $z(\sigma,0)=\delta_{\sigma,\sigma^{(0)}}$ where $\sigma^{(0)}$ is a randomly chosen sequence .
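Because the sequence space is finite, the full nonlinear iteration ( [ quasi ] ) can be run directly for small L. The sketch below does this for a maximally rugged landscape with i.i.d. random fitnesses and prints the times at which the identity of the most populated genotype changes, reproducing the stasis-and-jump pattern described in the introduction; the sequence length, mutation probability, fitness distribution and number of generations are illustrative choices.

```python
import numpy as np
from itertools import product

L, mu, T = 8, 1e-3, 40
rng = np.random.default_rng(5)

seqs = np.array(list(product([0, 1], repeat=L)))            # all 2^L genotypes
w = rng.exponential(size=2 ** L)                            # i.i.d. random fitnesses
d = (seqs[:, None, :] != seqs[None, :, :]).sum(-1)          # pairwise Hamming distances
P = mu ** d * (1.0 - mu) ** (L - d)                         # mutation probabilities p_{sigma'->sigma}

x = np.zeros(2 ** L)
x[np.argmin(w)] = 1.0                                       # start at a low-fitness genotype

leader = []
for t in range(T):
    x = P @ (w * x)                                         # selection followed by mutation
    x /= x.sum()                                            # normalisation by the mean fitness
    leader.append(w[np.argmax(x)])                          # fitness of the most populated genotype

jumps = [0] + [t for t in range(1, T) if leader[t] != leader[t - 1]]
print([(t, round(leader[t], 3)) for t in jumps])            # stretches of stasis, sudden jumps
```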
for mutation probability , after one iterationwe have ( , 1 ) ~^d(,^(0 ) ) w(^(0 ) ) .one - gen thus in an infinite population model , each sequence gets populated in one generation obviating the need for `` valley crossing '' which is required for finite populations . although an exact solution of ( [ linear ] ) for is not available , it is possible to obtain several asymptotically exact results concerning the most populated genotype using a simplified version of the quasispecies dynamics .numerical simulations of showed that dynamical properties involving the most populated genotype are well described by a simplified model which approximates the population in ( [ linear ] ) by ( , t ) ~^d(,^(0 ) ) w^t ( ) , t > 1 .shell this model ignores mutations once each sequence has been populated and allows the population at each sequence to grow with its own fitness . however , a recent perturbative analysis in the small parameter shows that this approximation holds for highly fit sequences and at short times .writing and rescaling time by in ( [ shell ] ) , we find that the logarithmic population obeys the following linear equation : e(,t)=-d(,^(0 ) ) + f ( ) t .model the linear evolution of the ( logarithmic ) population of sequences for is shown in fig .[ talklines]a .since the initial population fraction given by ( [ one - gen ] ) is same for all the sequences at constant hamming distance from , lines are seen to emanate from the same intercept .however as the genotype with the largest slope ( fitness ) at constant intercept has the potential to become the most populated sequence , we arrive at the model in fig .[ talklines]b in which genotypes are retained , each of whose fitness is an independent but non - identically distributed variable .defined by ( [ model ] ) for .the bold lines have the largest fitness amongst the fitnesses at distance from the origin .( b ) evolutionary race : the sequence at distance is the most populated sequence ( winner ) while the one at distance is a record ( contender).,title="fig : " ] defined by ( [ model ] ) for .the bold lines have the largest fitness amongst the fitnesses at distance from the origin .( b ) evolutionary race : the sequence at distance is the most populated sequence ( winner ) while the one at distance is a record ( contender).,title="fig : " ] in a sequence of random variables , a _ record _ is said to occur at if for all . in fig .[ talklines]b , the sequences at distance and from the initial sequence are records but the sequence at does not become a most populated genotype . in order to qualify as a _ jump _ , it is not sufficient to have a record fitness ; the population should also be able to overtake the current winner in minimum time .due to the overtaking time minimization constraint , the records and jumps have different statistical properties which we describe briefly in the next subsections . although the record statistics for independent and identically distributed ( i.i.d .) random variables is well studied , much less is known when the variables are not i.i.d. . herewe have a situation in which is a maximum of i.i.d . 
random variables .however , since the record fitness is the largest amongst i.i.d .variables and there are ways of choosing it , the probability that the fitness is a record is given by _k= , k < l/2 .ptildeshell the meaning of the above distribution is intuitively clear : as it is easier to break records in the beginning , the probability to find a record is near unity for and it vanishes beyond because the global maximum typically occurs at this distance .the average number of records can be obtained by simply integrating over to yield .it is also possible to find the typical spacing between the and record where we have labeled the last record ( i.e. global maximum ) as .a straightforward calculation shows that the typical inter - record spacing falls as a power law given by ( j ) , j 1 .interrecshell the above expression indicates that the spacing between the last few records ( i.e ) is of order , while most of the records are crowded at the beginning which is consistent with the behavior of the record occurrence probability ( [ ptildeshell ] ) .the calculation of jump statistics is more involved than that of records because a jump event requires a minimization of the overtaking time .this constraint imposes a condition on the fitnesses of the squences that can possibly overtake the current leader in a time interval between and . the sequence at distance can overtake the one ( with fitness ) at time if the fitness and at time , if f(k)== f+- dt+o(dt^2 ). then the total collision rate with which the sequence is overtaken by the one is given as w_k,k(f , t ) p_k ( f+ ) , k > k coll where is the distribution of the maximum of i.i.d .random variables distributed according to with support over the interval $ ] .using this collision rate , we can write the probability that the sequence at distance overtakes the one at time as _ k,k(t)= _ f_^f _ df w_k,k(f , t ) p_k(f , t ) [ basic ] where the probability that the sequence has the largest population at time is given by p_k(f , t)=p_k(f ) _ ^l _ f_^f+ df p_j(f ) . max note that unlike the records , the jump properties depend on the underlying distribution of the random variables . below we present some results when the distribution . integrating ( [ basic ] ) over time , the probability distribution that sequence is overtaken by sequence is obtained , p_k,k ( ) e^- , k < k < l/2 .this form of the distribution implies that the overtaking sequence is located within distance of the overtaken sequence .thus the typical spacing between successive jumps for large is roughly constant and goes as unlike in the case of records discussed in the last subsection .the jump distribution for a jump to occur at distance is obtained by integrating over and we have p_k _ h ( -k ) where is the heaviside step function which takes care of the fact that the record distribution ( [ ptildeshell ] ) vanishes at distance . 
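the contrast between records and jumps can be illustrated with a direct simulation of the reduced model ( [ model ] ) : the best fitness in each hamming shell is the maximum of the corresponding number of i.i.d. variables , records are shells that beat every closer shell , and jumps are shells that actually lead the population at some time . in the python sketch below the distribution , sequence length , time horizon and number of runs are illustrative assumptions , and very late overtakings are missed because of the finite horizon .

import numpy as np
from math import comb

rng = np.random.default_rng(0)
L, n_runs = 14, 500                                          # illustrative values
k = np.arange(1, L // 2 + 1)
sizes = [comb(L, i) for i in k]
t = np.linspace(0.0, 100.0, 4000)

n_rec, n_jump = [], []
for _ in range(n_runs):
    F = np.array([rng.exponential(size=n).max() for n in sizes])          # best fitness per shell
    n_rec.append(sum(F[i] == F[: i + 1].max() for i in range(len(k))))    # record shells
    E = -k[:, None] + F[:, None] * t[None, :]                             # E(k,t) = -k + F(k) t
    n_jump.append(len(np.unique(np.argmax(E, axis=0))))                   # shells that ever lead

exch = sum(sizes[i] / sum(sizes[: i + 1]) for i in range(len(sizes)))     # i.i.d. exchangeability estimate
print("mean number of records:", np.mean(n_rec), " (exchangeability estimate", round(exch, 2), ")")
print("mean number of jumps  :", np.mean(n_jump), " (every jump shell is also a record shell)")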
instead of integrating over time , by summing over the space variables in ( [ basic ] ) , the probability that a jump occurs at time can be obtained and is given by p(t)= ( ) .jt - shell the heavy tail distribution can be understood using a simple argument and implies that mean overtaking time is infinite .finally , by either summing over or integrating over time , the total number of jumps are found to be which is much smaller than the number of records .in this article , we discussed the steady state and the dynamics of the quasispecies model which describes a self - replicating population evolving under mutation - selection dynamics .on rugged fitness landscapes , the population fitness increases in a punctuated fashion and we described several exact results concerning this mode of evolution .our recent simulations indicate that the law in ( [ jt - shell ] ) for the deterministic populations also holds for finite stochastically evolving populations . at present, we do not have an analytical understanding of the latter result but it should be possible to test this law in long - term experiments such as those of on _ e. coli_. acknowledgements : i am very grateful to prof .j. krug for introducing me to the area of theoretical evolutionary biology .i also thank the organisers of the statphys conference at iit , guwahati for giving me an opportunity to present my work .
we study the adaptation dynamics of an initially maladapted population evolving via the elementary processes of mutation and selection . the evolution occurs on rugged fitness landscapes which are defined on the multi - dimensional genotypic space and have many local peaks separated by low fitness valleys . we mainly focus on the eigen model that describes the deterministic dynamics of an infinite number of self - replicating molecules . in the stationary state , for small mutation rates such a population forms a _ quasispecies _ which consists of the fittest genotype and its closely related mutants . the quasispecies dynamics on rugged fitness landscapes follow a punctuated ( or step - like ) pattern in which a population jumps from a low fitness peak to a higher one , stays there for a considerable time before shifting the peak again and eventually reaches the global maximum of the fitness landscape . we calculate exactly several properties of this dynamical process within a simplified version of the quasispecies model .
since the seminal work of black and scholes , who drew an analogy between the random motion of microscopic particles and the unpredictable evolution of stock prices , methods from theoretical physics have proved very useful for pricing various financial derivative products .the pricing of derivative products is based on a model for the evolution of the probability function of the underlying asset . in order for a model to describe the economic reality accurately ,a sufficiently general evolution for the probability distribution has to be allowed for . nevertheless , the simple diffusion model of black and scholes ( bs ) is still widely used .much of its success is due to the availability of closed - form analytical pricing formulas for many types of derivatives .it is known for a long time that the bs model is only a crude approximation to the economic reality and that its assumptions are violated in actual markets .perhaps the most illustrative violation is that the volatility implied from traded vanilla options , the implied volatility is not constant across strikes and maturities .examples of models that tackle such violations are local volatility processes , jump processes , lvy processes and stochastic volatility models . a stochastic volatility model that has been particularly successful at explaining the implied volatility smile in equity and foreign exchange markets is the heston model .in his seminal paper , heston derived a closed form solution for the price of a vanilla option , which enables a quick and reliable calibration to market prices , especially for liquidly traded vanilla options with maturities between 2 months and 2 years .contrary to the black - scholes model , to date in the heston model no closed - form analytic formulas have been found for exotic options ( for recent results see ) . since no such formulas are available in the literature for any but the simplest payoffs , often costly numerical techniques have to be used ( see and references therein ) .the original mathematical solution of the option pricing problem was formulated within the framework of partial differential equations , but an equivalent description with path integral methods was developed in the pioneering work by linetsky and dash .they showed that path dependent exotic options can be straightforwardly priced with the path integral method .this should be intuitively clear : in the path integral formalism , a probability is assigned to every evolution path of the asset . in the formulation with partial differential equations , such quantities are typically difficult to access .path integral methods have also been used in the pricing of options within stochastic volatility models and in the related problem of non - gaussian diffusion ( at the end of sec .[ pir ] we come back to this connection ) , but to the best of our knowledge no explicit option pricing formula as cheap to evaluate as stein and stein s or heston s formulas have yet been derived using path integrals .we will show in the present paper how to carry out this task for the heston model .the result we thereby obtain corresponds to the existing result for which the calibration and correspondence to market data has already been investigated see for example . 
for a thorough discussion onwhen which approach should be used we refer to and references herein .it is also known that there are still important features of asset price distributions which are absent in the heston model for example : empirical studies of time series provide evidence of the long time memory of volatility . since models containing a memory effect through retarded interaction , for example in the context of polarons , , have been solved within a path integral framework , we think our method can prove to be useful in more realistic models for the market also .the full power of the path integral method becomes clear , when we exploit its flexibility by calculating the price of an option in a setting where not only the volatility but also the interest rate is stochastic and follows the widely used cir model . to the best of our knowledge , no exact closed - form formula for this problem is available .therefore , we have checked our formulas against a monte carlo simulation .the plan of the paper is as follows . in sec .[ pir ] , we outline our model , which is the one introduced by heston . extensions of our method to different models are however straightforward . further in this sectionwe derive a closed - form solution for the time evolution of the asset price . in sec . [sec : pivan ] we present a closed - form pricing formula for plain vanilla options which only involves one numerical integration of a compilation of elementary functions . in sec .iii we will extend the heston model to include stochastic interest rate , in sec .iii a we present a closed - form solution for the vanilla option price which still contains only on numerical integration of a compilation of elementary functions . in sec .[ trosir ] we test this result with a monte carlo method and discuss the relevance of including stochastic interest rate .conclusions are drawn in sec .we will concentrate on assets following a diffusion process described by the following two equations introduced by heston here is the asset price , is a constant drift factor , is the variance of the asset , is the spring constant of the force that attracts the variance to its mean reversion level (also called the mean reversion speed ) , is the volatility of the variance , and and are independent wiener processes with unit variance and zero mean .the asset price follows a black - scholes process , whereas the volatility obeys a cox - ingersoll - ross process .there are two general approaches to determine the price of an option in a path integral context .one could , based upon equations ( [ hest1a ] ) , ( [ hest1 ] ) determine the probability distribution for the asset price at the strike time conditional on the values of the asset and the variance at the present time .the expectation value of the option price at time can be calculated by integrating the gain you make with a certain outcome of multiplied by the probability of obtaining that outcome over all possible values of . to obtain the present value of the price one then discounts this expectation value with the risk free interest rate . for a european call optionthis can be written as: p_{s}\left ( s_{t},v_{t}\mid s_{0},v_{0}\right ) .\label{jecpf1}\ ] ] we will refer to this approach as the `` _ _ asset propagation approach _ _ '' since is the propagator for a distribution of asset prices ( and volatilities ) . the other approach focuses on the option price rather than the asset evolution , as will be referred to as the `` _ _ option propagation approach _ _ '' . 
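as a rough numerical illustration of the pricing recipe ( [ jecpf1 ] ) , the sketch below prices a european call in the heston model in two independent ways : a full - truncation euler monte carlo of the two stochastic differential equations , and the textbook gil - pelaez / characteristic - function representation , which needs only a single one - dimensional numerical integration . the characteristic function used here is the standard `` little heston trap '' form from the literature , not a transcription of the formulas derived in this paper , and all parameter values ( including the correlation between the two brownian motions and the use of the risk - free rate as drift ) are illustrative assumptions ; the two numbers should agree to within discretisation and sampling error .

import numpy as np
from scipy.integrate import quad

# illustrative heston parameters
s0, v0, r = 100.0, 0.04, 0.02
kappa, theta, sigma, rho = 1.5, 0.04, 0.4, -0.6
T, K = 1.0, 100.0

# ---- monte carlo with a full-truncation euler scheme ------------------------------
rng = np.random.default_rng(42)
n_steps, n_paths = 200, 100_000
dt = T / n_steps
x = np.full(n_paths, np.log(s0))
v = np.full(n_paths, v0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
    vp = np.maximum(v, 0.0)                                   # full truncation keeps the variance usable
    x += (r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1
    v += kappa * (theta - vp) * dt + sigma * np.sqrt(vp * dt) * z2
mc_price = np.exp(-r * T) * np.maximum(np.exp(x) - K, 0.0).mean()

# ---- semi-analytic price: one numerical integration of a characteristic function ---
def cf(u):
    # risk-neutral characteristic function of ln S_T ("little heston trap" form)
    u = complex(u)
    beta = kappa - 1j * rho * sigma * u
    d = np.sqrt(beta**2 + sigma**2 * (1j * u + u**2))
    g = (beta - d) / (beta + d)
    e = np.exp(-d * T)
    C = 1j * u * (np.log(s0) + r * T) + kappa * theta / sigma**2 * (
        (beta - d) * T - 2.0 * np.log((1.0 - g * e) / (1.0 - g)))
    D = (beta - d) / sigma**2 * (1.0 - e) / (1.0 - g * e)
    return np.exp(C + D * v0)

def gil_pelaez(j):
    if j == 1:
        f = lambda u: (np.exp(-1j * u * np.log(K)) * cf(u - 1j) / (1j * u * cf(-1j))).real
    else:
        f = lambda u: (np.exp(-1j * u * np.log(K)) * cf(u) / (1j * u)).real
    return 0.5 + quad(f, 1e-8, 200.0, limit=400)[0] / np.pi

cf_price = s0 * gil_pelaez(1) - K * np.exp(-r * T) * gil_pelaez(2)
print("monte carlo call price       :", mc_price)
print("characteristic-function call :", cf_price)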
in his paper [ 6 ] , heston discusses the subtle differences between the asset point of view and the option price point of view , and this discussion is also relevant to the present path - integral framework .heston motivates that the time evolution of the option price is governed by the following partial differential equation ( pde): -\lambda v\right\ } \frac{\partial u}{\partial v}-\frac{1}{2}vs^{2}\frac{\partial^{2}u}{\partial s^{2}}-\rho\sigma vs\frac{\partial^{2}u}{\partial s\partial v}-\frac{1}{2}\sigma ^{2}v\frac{\partial^{2}u}{\partial v^{2 } } , \label{hestssde}\ ] ] where is a parameter introduced on the basis of no - arbritage arguments and setting up a risk - free portfolio .if one makes the substitution one obtains the following pde for as a function of the asset price and the volatility: -\lambda v\right\ } \frac{\partial v}{\partial v}-\frac{1}{2}vs^{2}\frac{\partial^{2}v}{\partial s^{2}}-\rho\sigma vs\frac{\partial^{2}v}{\partial s\partial v}-\frac{1}{2}\sigma ^{2}v\frac{\partial^{2}v}{\partial v^{2}}. \label{hestssde2}\ ] ] based on this pde , one can find a kernel that propagates a given final distribution backwards to the present value of the option .since the value of the option at the final time is known , ] , is given by :=\frac{2}{\sigma^{2}}\left\ { \dot{z}-\frac{1}{2}\left [ \frac{1}{z}\left ( \kappa\theta-\frac{\sigma^{2}}{4}\right ) -\kappa z\right ] \right\ } ^{2}-\frac{1}{4z^{2}}\left ( \kappa\theta-\frac{\sigma^{2}}{4}\right ) -\frac{\kappa}{4 } , \label{cird}\ ] ] the first step in the evaluation of eq .( [ pimooi ] ) is the integration over all -paths .because the action is quadratic in this integration can be done analytically and yields \right\ } .\label{eq : pi1b}\ ] ] note that the probability to arrive in only depends on the average value of the volatility along the path : , in agreement with ref . . 
however , this average value appears in the denominator of the third term , and to perform the functional integral one needs to bring this into the numerator .this is achieved by rewriting part of the expression ( [ eq : pi1b ] ) as follows: = \int_{-\infty}^{+\infty}\frac{dk}{2\pi}\exp\left [ i\left ( y_{t}-y_{0}\right ) k-\frac{\int z^{2}dt\left ( 1-\rho^{2}\right ) } { 2}k^{2}\right ] .\label{eq : gidb}\ ] ] combining eqns .( [ eq : pi1b ] ) and ( [ eq : gidb ] ) and making the substitution the transition probability becomes \int\mathcal{d}z(t)\label{eq : pi2b}\\ \times & \exp\left ( -\int_{0}^{t}dt\left\ { \mathcal{l}_{cir}[z(t)]+\frac { 1}{2}z^{2}\left [ \left ( 1-\rho^{2}\right ) l^{2}+2li\left ( \frac{\rho } { \sigma}\kappa-\frac{1}{2}\right ) \right ] \right\ } \right ) .\nonumber\end{aligned}\ ] ] the path integral over the cir action is formally equivalent to the exactly solvable radial harmonic oscillator and , fortunately , adding terms proportional to to the action does not spoil this equivalence .the full path integral over can be carried out without approximations with the following result: \nonumber\\ \times & \int\limits_{-\infty}^{+\infty}\exp\left [ i\left ( y_{t}-y_{0}\right ) l\right ] \sqrt{z_{0}z_{t}}\frac{4\omega}{\sigma^{2}\sinh\left ( \omega t\right ) } \nonumber\\ \times & \exp\left [ -\frac{2\omega}{\sigma^{2}}\left ( z_{0}^{2}+z_{t}^{2}\right ) \coth\left ( \omega t\right ) \right ] i_{\frac{2}{\sigma^{2}}\kappa\theta-1}\left [ \frac{4\omega z_{0}z_{t}}{\sigma^{2}\sinh\left ( \omega t\right ) } \right ] dl .\label{napad2}\ ] ] where is the -dependent frequency of the radial harmonic oscillator that corresponds to the cir lagrangian ( [ cird ] ) . after transforming back to the variablewe see that also the integral over the final value can be done analytically ( see e.g. ) , yielding the marginal probability distribution ( written in the original variable ) as a simple fourier integral: \nonumber\\ & \times\int\limits_{-\infty}^{+\infty}n^{\frac{2}{\sigma^{2}}\kappa\theta } \exp\left\ { i\left [ x_{t}+\frac{\rho}{\sigma}\left ( v_{0}+\kappa\theta t\right ) \right ] l\right .\nonumber\\ & \left .-\frac{2\omega}{\sigma^{2}\sinh\left ( \omega t\right ) } \left [ \cosh\left ( \omega t\right ) -n\right ] v_{0}\right\ } dl , \label{eq : pfinc}\ ] ] where n is : note the similarity of the expression ( [ eq : pfinc ] ) with the result obtained in ref . , derived for a general stochastic process with non - gaussian noise . from now onwe follow the option propagation approach and set equal to . the price of a call option with expiration date and strike when the transition probability is known is given by eq .( [ jecpf1 ] ) . writing this formula in the variable and thereby inserting the result ( [ eq : pfinc ] ) for the transition probability results in: \mathcal{p}\left ( x_{t}\mid 0,v_{0}\right ) , \label{eq : pricep}\ ] ] where the risk free interest rate was restored and denoted by .now there are still two numerical integrations that have to be done . following the derivation outlined in ref . we can rewrite expression ( [ eq : pricep ] ) so that only one numerical integration remains: \right . 
\label{resvop}\\ & \left .\times\left [ s_{0}\exp\left ( \theta-\frac{\rho}{\sigma}a\right ) -e^{-rt}k\exp\left ( \upsilon\right ) \right ] -s_{0}+e^{-rt}k\right\ } \frac{dl}{2\pi},\nonumber\end{aligned}\ ] ] with ^{-1},\\ \theta & = \frac{2\nu v_{0}}{\sigma^{2}\sinh\left ( \nu t\right ) } \left [ m-\cosh\left ( \nu t\right ) \right ] + \frac{2}{\sigma^{2}}\kappa\theta\log m,\\ \upsilon & = \frac{2\omega v_{0}}{\sigma^{2}\sinh\left ( \omega t\right ) } \left [ n-\cosh\left ( \omega t\right ) \right ] + \frac{2}{\sigma^{2}}\kappa\theta\log n. \label{laatst}\ ] ] and defined as before ( [ denomega ] ) .we have tested this result against the formula stated in ref .this confirmed the correctness of formula ( [ resvop ] ) .now we are confident to explore new grounds with our method in the following section .in the previous section we assumed the interest rate to be constant . here we allow the interest rate to change in time , . applying black and scholes no - arbitrage argument on heston s risk - free portfolio motivation for the evolution of the option price ,we again obtain the partial differential equation ( [ hestssde2 ] ) with rather than a constant -\lambda v\right\ } \frac{\partial v}{\partial v}-\frac{1}{2}vs^{2}\frac{\partial^{2}v}{\partial s^{2}}-\rho\sigma vs\frac{\partial^{2}v}{\partial s\partial v}-\frac{1}{2}\sigma ^{2}v\frac{\partial^{2}v}{\partial v^{2}}\ ] ] for a given function this leads to a kernel $ ] so that the option price becomes=\int\limits_{-\infty}^{+\infty}ds_{t}dv_{t}\max\left [ s_{t}-k,0\right ] \text { } e^{-\int r(t)dt}p_{v}\left [ s_{t},v_{t}\mid s_{0},v_{0}\mid r(t)\right ] .\ ] ] note that the option price is now a functional of the given time evolution of the interest rate .as in the previous section , it is convenient to introduce new integration variables , \\z(t ) & = \sqrt{v(t)}.\end{aligned}\ ] ] in the path - integral treatment , the kernel can be written as a sum over all possible realizations of and , weighed by the action functional of the system : & = \int\limits_{-\infty}^{+\infty}dx_{t}dv_{t}\max\left [ e^{x_{t}}-k,0\right ] \text { } e^{-{\textstyle\int\nolimits_{0}^{t } } r(t)dt}\nonumber\\ & \times\int\mathcal{d}y\mathcal{d}z\text { } \exp\left ( -\int\limits_{0}^{t}\left\ { \mathcal{l}_{q}\left [ y(t),z(t),r(t)\right ] + \mathcal{l}_{cir}[z(t)]\right\ } dt\right ) , \end{aligned}\ ] ] where is the quadratic lagrangian ( [ elkuu ] ) ^{2},\ ] ] and is the cir lagrangian . of course, we can not know what particular realization of the interest rate will appear in the future .we assume the interest rate to follow a cir process which is uncorrelated from the other two stochastic processes, the value for the option price then needs to be averaged over the realization of in this cir process . 
where the calculation of the expectation value of such a functional might become cumbersome with conventional probabilistic techniques , it can be evaluated very elegantly with the feynman - kac formula : \right\rangle = \int\mathcal{d}r\text { } \mathcal{c}[r(t)]\exp\left ( -\int\limits_{0}^{t}\mathcal{l}_{cir}[r(t)]dt\right ) , \ ] ] where is the lagrangian for the cir process .the final result can be expressed with a modified propagator as p(s_{t},v_{t},r_{t}\mid s_{0},v_{0},r_{0 } ) , \label{blab}\ ] ] with + \mathcal{l}_{cir}[z(t)]+\mathcal{l}_{cir}[r(t)]\right\ } dt\right ) .\end{aligned}\ ] ] the stochastic interest rate makes the vanilla price dependent on the specific path followed by the interest rate .this part of the payoff has been taken into the calculation of the propagator , where it is analytically tractable , and no longer appears explicitly in the expression ( [ blab ] ) for the option price .herein lies the strength of the path - integral approach , to price path - dependent options . with a stochastic interest ratethe european vanilla option becomes dependent on the entire path of the interest rate and is still solved in a very straightforward way .this is promising for more general option types , such as the barrier and asian options that we are currently investigating .a useful substitution to perform the functional integrations is as was the case for the lagrangian corresponding to the volatility , the lagrangian corresponding to the interest rate process will also be formally equivalent to the lagrangian corresponding to a radial harmonic oscillator ; furthermore the addition of another term quadratic in stemming from the discount factor does nt spoil the correspondence .the result reads as follows: \nonumber\\ & + i\int\limits_{-\infty}^{\infty}\frac{1}{l}\left\ { k\exp\left [ \upsilon_{r}\left ( 0\right ) + \frac{\kappa_{r}}{\sigma_{r}^{2}}a_{r}\right ] \right . -s_{0}+\exp\left[ i\left ( \frac{\rho}{\sigma}a+x_{e}\right ) l+\frac{\kappa}{\sigma^{2}}a+\frac{\kappa_{r}}{\sigma_{r}^{2}}a_{r}\right ] \nonumber\\ & \left .\times\left [ s_{0}\exp\left ( -\frac{\rho}{\sigma}a+\theta + \theta_{r}\right ) -k\exp\left ( \upsilon+\upsilon_{r}\right ) \right ] \right\ } \frac{dl}{2\pi}. \label{prmr2}\ ] ] to make it surveyable , we introduced the following notations ^{-1},\\ \theta_{r } & = \frac{2\nu_{r}r_{0}}{\sigma_{r}^{2}\sinh\left ( \nu _ { r}t\right ) } \left [ m_{r}-\cosh\left ( \nu_{r}t\right ) \right ] + 2\frac{\kappa_{r}\theta_{r}}{\sigma_{r}^{2}}\log m_{r},\\ \upsilon_{r}\left ( l\right ) & = \frac{2\omega_{r}\left ( l\right ) r_{0}}{\sigma_{r}^{2}\sinh\left [ \omega_{r}\left ( l\right ) t\right ] } \left\ { n_{r}\left ( l\right ) -\cosh\left [ \omega_{r}\left ( l\right ) t\right ] \right\ } + 2\frac{\kappa_{r}\theta_{r}}{\sigma_{r}^{2}}\log n_{r}\left ( l\right ) .\end{aligned}\ ] ] these notations reflect the extension to the case of stochastic interest rate ( symbols with subscript ) of the corresponding quantities in the heston model ( equations ( [ eerst])-([laatst ] ) ) .notice the resemblance with formula ( [ resvop ] ) .formula ( [ prmr2 ] ) still contains just one numerical integration with an integrand composed out of elementary functions . to the best of our knowledge ,only approximate analytical formulae are available when both the volatility and interest rate are stochastic . 
because of the lack of alternative exact analytical expressions , we have checked the validity of our formula ( [ prmr2 ] ) against numerical monte carlo simulations .our monte carlo method is outlined below . first notice that substitutions ( [ subsngv ] ) transform the -variable into a variable , independent of the interest rate by subtracting the time averaged interest rate : .this results in the same equation as in the constant interest rate situation , eq .( [ hest3a ] ) .also the discount factor only contains .this means that the knowledge of the probability distribution is sufficient to calculate the price by means of the formula ( [ resvop ] ) derived in the constant interest rate setting .so the monte carlo scheme used is the following : first values for are simulated and used to calculate the option price for these values , next the price is averaged over all the simulations .a value for is simulated as follows : time is discretized in little time steps , we sample a path for and integrate along this path . to calculate the probability distribution for , we used the result that the stochastic time increment of a cir variable over a small time step follows a non - central distribution .the probability distribution of the average interest rate is then simulated by generating many -paths in discretized time . as shown in fig .[ prentsti5 ] , the agreement between the analytical ( thick full line ) and numerical option prices is excellent . in this sectionthe option propagation approach was followed from the beginning . in this setting itis necessary to make a choice between the two approaches from the start because in the asset propagation approach one would actually have to introduce a stochastic process for the drift instead of for the interest rate . that these two should follow the same stochastic process is not clear .since the option propagation approach is the most common one anyway we followed this approach .if one does want to introduce a stochastic process for the drift this poses no problem and the derivation of an option price in this setting would be completely similar . in the current treatment , we have two layers of generalization as compared to the black - scholes result .first , the volatility appearing in the black - scholes process is stochastic this leads to the heston model .second , the interest rate of the black - scholes model is also stochastic leading to our present results . in this paragraph, we argue that both improvements can have an equally important effect on the option price .[ t ] figure1.eps this is illustrated in fig .[ prentsti5 ] , where the different approaches are compared .let s start with the most complete model , where both interest rate and volatility are stochastic .the resulting option price , eq .( [ prmr2 ] ) , for this model is shown as a thick red curve .the result from the closed - form expression agrees well with the monte carlo simulation , shown as crosses .now we strip off one layer of complexity , and fix the interest rate it is no longer a stochastic variable .then we obtain the heston model as an ` approximation ' to a stochastic interest rate world .the question poses itself of which fixed interest rate to use , if we want to make the comparison .two choices are shown in fig .[ prentsti5 ] : and .the former choice ( dotted blue curves ) sets the heston interest rate equal to the interest rate at time , whereas the latter choice ( dash - dotted curves ) sets the heston interest rate equal to the mean reversion level . 
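returning to the monte carlo scheme outlined above , a minimal implementation of the exact cir increment ( sampled from a scaled non - central chi - square distribution , as mentioned ) together with the resulting stochastic discount factor is sketched below ; all parameter values are illustrative , and the closed - form cir zero - coupon bond price quoted at the end is the textbook cox - ingersoll - ross expression , included only as a cross - check of the simulation .

import numpy as np

rng = np.random.default_rng(7)

# illustrative cir parameters for the short rate
r0, kappa_r, theta_r, sigma_r = 0.03, 0.8, 0.04, 0.1
T, n_steps, n_paths = 1.0, 100, 50_000
dt = T / n_steps

c = sigma_r**2 * (1.0 - np.exp(-kappa_r * dt)) / (4.0 * kappa_r)
dof = 4.0 * kappa_r * theta_r / sigma_r**2                   # degrees of freedom of the non-central chi-square

r = np.full(n_paths, r0)
integral = np.zeros(n_paths)                                 # accumulates int_0^T r(t) dt path by path
for _ in range(n_steps):
    r_new = c * rng.noncentral_chisquare(dof, r * np.exp(-kappa_r * dt) / c, size=n_paths)
    integral += 0.5 * (r + r_new) * dt                       # trapezoidal rule along each path
    r = r_new

print("monte carlo discount factor E[exp(-int r dt)] :", np.exp(-integral).mean())

# textbook cir zero-coupon bond price, used only to cross-check the simulation
h = np.sqrt(kappa_r**2 + 2.0 * sigma_r**2)
denom = (kappa_r + h) * (np.exp(h * T) - 1.0) + 2.0 * h
B = 2.0 * (np.exp(h * T) - 1.0) / denom
A = (2.0 * h * np.exp((kappa_r + h) * T / 2.0) / denom) ** (2.0 * kappa_r * theta_r / sigma_r**2)
print("closed-form cir bond price                     :", A * np.exp(-B * r0))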
for the parameter values used in fig .[ prentsti5 ] , the most complete result lies between the two heston ` approximations ' , but this is not necessarily so .[ figje2 ] shows that for some choices of other ( realistic ) parameters , the full result can lie outside both heston approximations . nevertheless ,as becomes very large , the stochastic interest rate will be drawn very tightly to the mean reversion rate , and one expects the full result to be near the heston approximation with .when is very small , the stochastic interest rate will not be drawn quickly towards so that when also is small , the full results will be near the heston approximation with .next , we strip off the second layer of approximation , and also fix the volatility .this results in the familiar black - scholes model as the crudest approximation to our system .now a second choice has to be made : which value of the volatility to use . here, we take the stochastic volatility at time zero to be equal to the mean reversion level of the volatility cir process , so that the ambiguity of choice is avoided .the choice for what interest rate to use , however , remains . in fig .[ prentsti5 ] , we show the black - scholes results with ( dashed line ) and ( full line ) .we have plotted all the results relative to the black - scholes result with to emphasize the differences rather than the absolute magnitude of the prices ( for this reason , the black - scholes result is the baseline of the plots ) .the difference between the three panels of fig .[ prentsti5 ] is the value of the correlation between asset price and volatility .[ ptb ] figure2.eps from figs .[ prentsti5 ] and [ figje2 ] , it is clear that both levels of approximation ( keeping the volatility constant and keeping the interest rate constant ) have an equally large effect on the option price . even within the heston framework ,the choice of what value to use for the interest rate is seen to influence the price considerably .choosing a different interest rate , or keeping the interest rate as a stochastic variable , leads to a price correction that is as large as the price correction obtained by going from the black - scholes to the heston model .this result emphasizes the importance of a correct treatment of the interest rate in pricing models ( this also depends strongly on the length of the lifetime of the option ) .finally we must remark that the price differences when working within the standard heston model or within the extended one can be influenced by the calibration method . 
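for reference , the black - scholes baseline against which the curves in these comparisons are normalised can be evaluated with the standard closed - form expression ; the strike , maturity , volatility and the two candidate interest rates in the sketch below are purely illustrative stand - ins and are not the figure parameters .

import numpy as np
from scipy.stats import norm

def black_scholes_call(s0, K, rate, vol, T):
    # textbook black-scholes call price, used only as the comparison baseline
    d1 = (np.log(s0 / K) + (rate + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
    d2 = d1 - vol * np.sqrt(T)
    return s0 * norm.cdf(d1) - K * np.exp(-rate * T) * norm.cdf(d2)

# e.g. volatility fixed at sqrt(theta) and the rate fixed either at r_0 or at theta_r (assumptions)
print("baseline with rate r_0    :", black_scholes_call(100.0, 100.0, 0.03, 0.20, 1.0))
print("baseline with rate theta_r:", black_scholes_call(100.0, 100.0, 0.04, 0.20, 1.0))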
for figures .[ prentsti5 ] and [ figje2 ] we used the same parameters for the volatility process both in the standard model and in the extended one , parameter values for the interest rate process are calibrated separately .literature shows that the parameter values for the volatility process ( see for example and ) and the interest rate process ( see for example and ) can attain values in a broad range containing the values we chose to produce fig .[ prentsti5 ] and fig .[ figje2 ] .however if the parameter values obtained for the interest rate process are used in formula ( [ prmr2 ] ) to calibrate the remaining parameter values for the volatility process one might get different results .we can not exclude that this calibration approach would lead to smaller price differences between the two approaches .however such a calibration is a research area on its own and is outside the scope of this article .we have developed a path - integral method to derive closed - form analytical formulas for the asset price distribution in the heston stochastic volatility model .closed - form formulas are obtained for the logreturn of the derivative and the vanilla option price .the presented results correspond to the known semi - analytic results obtained from solving the partial differential equation by standard techniques .the flexibility of our approach is demonstrated by extending the results to the case where the interest rate is a stochastic variable as well , and follows a cir process .for this case , to the best of our knowledge , no exact analytical solutions have been derived before .we have checked our semi - analytical results for the model with both stochastic volatility and stochastic interest rate against a monte - carlo simulation .the quantitative analysis shows that the effect of stochastic interest rate on the heston model can be as large as the effect of the stochastic volatility on the black - scholes model .however we did not perform a full calibration , which might influence the results . finally , the analogy between stochastic interest rate models and path dependent options makes our method promising for the pricing of exotic derivative products .acknowledgments discussions with l. lemmens , i. de saedeleer , k. int hout and e. boksenbojm are gratefully acknowledged .this work is supported financially by the fund for scientific research - flanders , fwo project g.0125.08 .j. t. and d. l. gratefully acknowledge support of the special research fund of the university of antwerp , bof noi ua 2007 .
we present a path integral method to derive closed - form solutions for option prices in a stochastic volatility model . the method is explained in detail for the pricing of a plain vanilla option . the flexibility of our approach is demonstrated by extending the realm of closed - form option price formulas to the case where both the volatility and interest rates are stochastic . this flexibility is promising for the treatment of exotic options . our new analytical formulas are tested with numerical monte carlo simulations .
the role of uavs ( unmanned aerial vehicles ) has gained significant importance in the last decades .they have many advantages ( agility , low surface area , ability to work in constrained or dangerous places ) over their conventional precedents .in addition , current uavs are more biologically - inspired in terms of shape and performance because of the improvements in electronics and propulsion .unfortunately , we are still far away from using their capacity at the fullest .this is mostly related with the weakness of current control algorithms against high - dimensional and nonlinear environments . in this sense , generating aggressive maneuvers is interesting and hard to accomplish . in this paper , our approach to solve this issue is designed in view of the experiments on frogs and monkeys which suggest that we are faced with an inverse - kinematics algorithm that adapts to the environment and changes in a sequence of target points irrespective of the initial conditions . in theory , we analyzed dynamic movement primitives ( dmps) and combined them using contraction theory . in experiments ,obstacle avoidance dmp of a human - piloted flight data is segmented into parts and combined at different initial points to achieve maneuvers against different obstacles on different locations .background of our work is briefly detailed below .`` by three methods we may learn wisdom : first , by reflection , which is noblest ; second , by imitation , which is easiest ; and third , by experience , which is the most bitter . ''( confucius ) imitation takes place when an agent learns a behavior by observing the execution of that behavior from a teacher .imitation is not inherent to humans .it is also observed in animals .for example , experiments show that kittens exposed to adult cats manipulate levers to retrieve food much faster than the control group .there has been a number of applications on imitation learning in the field of robotics .studies on locomotion , humanoid robots , , , and human - robot interactions have used imitation learning or movement primitives .the emphasis on these studies is on primitive derivation and movement classification ; combinations of the primitives and primitive models in order to extract behaviors . aggressive control of autonomous helicopters represents a challenging problem for engineers .the challenge owes itself to the highly nonlinear and unstable nature of the dynamics along with the nonlinear relations for actuator saturation .nevertheless , we can find successful unmanned helicopter examples in the literature .however , model helicopters controlled by humans can achieve considerably more complex and aggressive maneuvers compared to that can be done autonomously with the state of the art . in , it is observed that after several repetitions of the same maneuver , performed by a human , generated trajectories are similar and the control inputs are well - structured and repetitive .hence , it is intuitive to focus on understanding human s maneuvers to find proper algorithms for unmanned control . 
in their experiment with deafferented and intact monkeys ,bizzi found that a certain movement can be executed regardless of initial conditions , emphasizing the importance of feedback control .in particular , they have shown that the control variable is the equilibrium state of the agonist and antagonist muscles .same experimental setup is again used to characterize the trajectory of the motion in .their results additionally suggest that movement called `` virtual trajectory '' is composed of more than one equilibrium point and central nervous system uses the stability of the lower level of the motor system to simplify the generation of movement primitives .bizzi and mussa - ivaldi s experiments on frogs provide us with further clues in understanding movement primitives .they microstimulated spinal cord and measured the forces at the ankle .having repeated this process with ankle replaced at nine to 16 locations , they observed that collection of measured forces always converges to a single equilibrium point . in their model ,inverse kinematics plays a crucial role in achieving the endpoint trajectory ( see mussa - ivaldi ) .this section outlines the analysis of the dmp algorithm using contraction theory .dmp is a trajectory generation algorithm which interpolates between the start and end points of a path based on learning .the system can be represented by where , and characterize the desired trajectory , and are time constants , is a temporal scaling factor , is the desired end point .in addition , the canonical system is given by in general , assuming that the -function is zero , system will converge to exponentially .the goal of the dmp algorithm is to modify this exponential path so that the -function makes the system non - linear and allows us to generate desired trajectories between the origin and the point .the -function is a normalized linear combination of gaussians which helps to approximate the final trajectory , i.e. it has the general form where the dmp algorithm can also be extended to the rhythmic movements by changing the canonical system with the following : where corresponds to in eq .[ eq:4 ] as a temporal variable .similar to the discrete system , control policy : + where is a basis point for learning and ^t$ ] .learning aspect of the algorithm comes into play with the computation of the weights ( ) of the gaussians .weights are derived from eq.[eq:1 ] and eq.[eq:2 ] using the training trajectory and as variables .once the parameters of the -function are learned , then dmp can simply be used to generate the original trajectory . as detailed below, spatial and temporal shifts are achieved by adjusting the and respectively . *_ spatial adjustments : _ the first system [ eq.([eq:1 ] ) , eq.([eq:2 ] ) ] can be seen as a linear system .it is due to the fact that variable in -function is only multiplied by time - varying constant .hence , we can say that output ( ) is simply scaled by from superposition . * _ temporal adjustments : _ the second system [ ( eq.([eq:4 ] ) eq.([eq:5 ] ) ] is simply linear .in addition , f function is linear because the multiplier is a time - varying constant , temporally scaled by .thus , from linearity , we can say that temporal adjustments of the whole system is carried out by just changing the variable .these arguments can also be extended to the rhythmic dmps for modulations .the basic theorem of contraction analysis is stated as * * theorem ( contraction)**__consider the deterministic system _ _ _ where is a smooth nonlinear function . 
if there exist a uniformly invertible matrix associated generalized jacobian matrix _ _ is uniformly negative definite , then all system trajectories converge exponentially to a single trajectory , with convergence rate , where the largest eigenvalue of the symmetric part of f. the system is said to be contracting . _+ basically , a nonlinear time - varying dynamic system is called contracting if initial conditions or temporary disturbances are forgotten exponentially fast , i.e. , if trajectories of the perturbed system return to their nominal behavior with an exponential convergence rate .it turns out that relatively simple conditions can be given for this stability - like property to be verified .furthermore this property is preserved through basic system combinations , such as parallel combinations , feedback combinations , and series or hierarchies , yielding simple tools for modular design . for linear time - invariant systems, contraction is equivalent to strict stability .consider a system & = & \left [ \begin{array } { cc } f_{11 } & 0\\ f_{21 } & f_{22 } \end{array } \right ] \left [ \begin{array } { c } \delta z_1\\ \delta z_2 \end{array } \right ] \label{equation13}\end{aligned}\ ] ] + where and represent the first and the second system of dmp and the represent associated differential displacements ( see ) .equation ( [ equation13 ] ) display a hierarchy of contracting systems , and furthermore since is bounded by construction of , the whole system globally exponentially converges to a single trajectory .we can also extend the hierarchical contraction property to the rhythmic dmps , since the canonical system , which is shown below is contracting . although the system will eventually contract to the point , there will be a time delay due to the hierarchy between second and the first system .we can decrease this delay by increasing the number of weights in our equation .using contraction theory , stability of the dmps can be analyzed .once the original trajectory is mapped into the dmp , the system behaves linearly for a given input - output relation as shown before .moreover , contraction property guarantees the convergence into a single trajectory . from linearity , it is easy to show that learning the trajectories is not constrained by the stationary goal points that do not have a velocity components , which are required for equilibrium points in virtual trajectories .in this section , we use partial contraction theory to couple dmps .one - way coupling configuration of contraction theory allows a system to converge to its coupled pair smoothly .theory for the one - way coupling states the following two systems : in a given formula , if is contracting , then from any initial condition .a typical example for one way coupling is an observer design while the first system represents the real plant and the second system represents the mathematical model of the first system .the states of the second system will converge to the states of the first system and result in the robust estimation of the real system states .however , for our experiments , we interpret contraction as to imitate the transition between two states .it will be shown in section iv how the end of one trajectory becomes the initial condition of the second trajectory and contraction accomplishes the smooth transition . in dmps , we couple the two systems using the following equations : a toy example of the equations listed above can be seen in fig .[ one2 ] . 
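a numerical analogue of the toy example referred to above ( fig . [ one2 ] ) is sketched below : two goal attractors ( dmp transformation systems with the forcing term omitted ) run in parallel , and near the end of the first primitive the executed state is handed over to the second primitive 's dynamics plus a one - way coupling term toward the second primitive 's state , so that the mismatch between the two is forgotten exponentially . the gains , goals and switching time are illustrative choices , not values from the experiment .

import numpy as np

alpha_z, beta_z, tau, dt = 25.0, 25.0 / 4.0, 1.0, 0.001      # illustrative gains
k_couple, t_switch, T = 60.0, 0.8, 2.0

def attractor_step(y, z, goal, extra=0.0):
    z = z + dt / tau * (alpha_z * (beta_z * (goal - y) - z) + extra)
    y = y + dt / tau * z
    return y, z

y, z = 0.0, 0.0            # executed trajectory, starts on the first primitive
y2, z2 = 0.3, 0.0          # second primitive evolving from its own, different start point
gap = []
for step in range(int(T / dt)):
    t = step * dt
    y2, z2 = attractor_step(y2, z2, goal=-0.1)                               # second primitive
    if t <= t_switch:
        y, z = attractor_step(y, z, goal=0.6)                                # first primitive
    else:
        y, z = attractor_step(y, z, goal=-0.1, extra=k_couple * (y2 - y))    # one-way coupling
    gap.append(abs(y - y2))

print("mismatch when the coupling is switched on:", gap[int(t_switch / dt)])
print("mismatch 0.5 s later                     :", gap[int((t_switch + 0.5) / dt)])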
in thissetting , is the first trajectory primitive , which contracts to the second trajectory primitive .one - way coupling has many advantages as a method over its precedents : in , trajectories are achieved by simply stretching the original trajectory in its coordinates and there is a direct relation between initial and end points . also , there are discontinuities in terms of derivatives of the trajectory at the transition regions between primitives .giese solves the problem of discontinuities by first taking the derivatives of the original trajectories , then combining the derivatives , and finally integrating them again using initial conditions .however , this method adversely affects the accuracy of the trajectories .hence , our method improves on and by generating more accurate trajectories independent of initial points . in , snapshots of the pilot s maneuversare taken and evaluated as noisy measurements of hidden and true trajectory . in their model , time indexes are used for the comparison of expert s demonstrations .maximization of the joint likelihood of demonstrations are achieved through trajectory learning algorithms . as was done in ,locally weighted learning is used for learning system dynamics close to trajectories . moreover ,desired trajectories are supervised by adding information specific to each maneuver . with the help of feasible trajectory , optimal controller and system dynamics along the maneuver, they achieved remarkable results on model helicopters .however , finding hidden trajectory requires noteworthy computational performance where they smooth out data to emphasize the similarities .in addition , their algorithm applies only for mimicking demonstrations . in our algorithm , learning the hidden and true trajectory of maneuvers can simply be done by comparing the weights of dmps ( see ) .it is also easier to manipulate dmps by changing parameters ( and ) for new challenges .moreover , our method lies on the background of biological experiments in such a way that it is adaptable for further research . in general , we summarize the advantages for using dynamical systems as control policies as follows : * it is easy to incorporate perturbations to dynamical systems . *it is easy to represent the primitives . *convergence to the goal position is guaranteed due to the attractor dynamics of dmp . *it is easy to modify for different tasks .* at the transition regions , discontinuities are avoided . *partial contraction theory forces the coupling from any initial condition . also in , schaal s suggested system is driven between stationary points .however , biological experiments suggest that we are faced with a `` virtual trajectory '' composed of equilibrium points that has velocity components . for this reason, we showed that we can achieve this property by combining nonconstant points .here , we apply the motion primitives on the helicopter .we used quanser helicopter ( see figure [ fig : quanser ] ) in our experiments .the helicopter is an under - actuated system having two propellers at the end of the arm .two dc motors are mounted below the propellers to create the forces which drive propellers .the motors axes are parallel and their thrust is vertical to the propellers .we have three degrees of freedom ( dof ) : pitch ( vertical movement of the propellers ) , roll ( circular movement around the axis of the propellers ) and travel ( movement around the vertical base ) in contrast with conventional helicopters with six degrees of freedom . 
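a minimal one - dimensional sketch of the learning and modulation steps discussed above is given below : the forcing term of the transformation system is fitted to a demonstration by locally weighted regression and the learned weights are then re - used with a shifted goal ( presumably the goal and the time constant are the parameters referred to in the previous paragraph ) . the gains , the basis - function heuristics and the demonstration signal are illustrative choices rather than the recorded pilot data .

import numpy as np

# a minimal discrete dmp; gains, basis heuristics and the demonstration are illustrative
alpha_z, beta_z, alpha_x, tau = 25.0, 25.0 / 4.0, 3.0, 1.0
n_basis, dt = 30, 0.001
t = np.arange(0.0, tau, dt)

# demonstration trajectory (starts and ends at rest)
y_demo = (3.0 * t**2 - 2.0 * t**3) + 0.5 * np.sin(np.pi * t) ** 2
yd = np.gradient(y_demo, dt)
ydd = np.gradient(yd, dt)
y0, g = y_demo[0], y_demo[-1]

# forcing term that would make the attractor reproduce the demonstration exactly
x = np.exp(-alpha_x * t / tau)                                   # canonical phase variable
f_target = tau**2 * ydd - alpha_z * (beta_z * (g - y_demo) - tau * yd)

# gaussian basis functions in phase space and locally weighted regression for the weights
c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))            # centres along the phase
h = n_basis**1.5 / c                                             # widths (a common heuristic)
psi = np.exp(-h[None, :] * (x[:, None] - c[None, :]) ** 2)
w = (psi * (x * f_target)[:, None]).sum(0) / ((psi * (x**2)[:, None]).sum(0) + 1e-10)

def basis(xs):
    return np.exp(-h * (xs - c) ** 2)

def rollout(goal, y_start):
    y, z, xs, out = y_start, 0.0, 1.0, []
    for _ in t:
        f = basis(xs) @ w / (basis(xs).sum() + 1e-10) * xs       # normalised forcing term
        z += dt / tau * (alpha_z * (beta_z * (goal - y) - z) + f)
        y += dt / tau * z
        xs += dt / tau * (-alpha_x * xs)
        out.append(y)
    return np.array(out)

print("maximum reproduction error   :", np.abs(rollout(g, y0) - y_demo).max())
print("end point for a shifted goal :", rollout(g + 0.5, y0)[-1], "(target", g + 0.5, ")")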
in systemmodel , the origin of our coordinate system is at the bearing and slip - ring assembly .the combinations of actuators form the collective and cyclic forces which are used as inputs in our controller .the schematics of helicopter are shown in figures [ fig : heli_draw ] and [ fig : topview ] .let , , and denote the moment of inertia of our system dynamics . for simplicity , we ignore the products of inertia terms . the equations of motion are as follows ( cf .ishutkina ) : where * is the total mass of the helicopter assembly , * is the mass of the rotor assembly , * is the length of the main beam from the slip - ring pivot to the rotor assembly , * , , are travel , pitch and roll angles respectively .* is the distance from the rotor pivot to each of the propellers , * , * and are the effective drag coefficients times the reference area and is the density of air .it can be seen that the above system is nonlinear in the states , but linear in terms of control inputs . in practice , we used feedback linearization with bounded internal dynamics ( see bayraktar ) for a 3dof helicopter , which tracks trajectories in elevation and travel . in this section , we first describe our numerical simulation of the proposed primitive framework .second , we describe our actual experiment on the quanser helicopter . in experimental setup, we used an operator with a joystick to create aggressive trajectories to pass an obstacle .however , generating aggressive trajectories with the joystick is a difficult task even for the operator .therefore , we designed an augmented control for the joystick to enhance the performance of the helicopter . in detail, we used `` up '' and `` down '' movements of the joystick to increase or decrease the that is applied to the actuators .for the `` right '' and `` left '' movements of the joystick , we preferred to control the roll angle using pd control . in the original maneuver , the obstacle s distance and the highest point are in the coordinates where and angles are and respectively andthe helicopter stops at the coordinates where , and ( see figure [ original ] ) . from several demonstrations, it is observed that our operator follows two distinct pattern to carry out the maneuver .accordingly , these two patterns suggest an equilibrium point at the top of the obstacle . 
therefore , to fly over different obstacles , the acquired primitiveis segmented into two primitives at the highest pitch angle .[ pitch1 ] and fig .[ pitch2 ] show the results of dmp algorithm for the pitch angle .the top left graphs are results for pitch angles , where green lines represent the operator input for the trajectories and blue lines represent the fittings that the dmp computes for different start and end points .hence , desired trajectories in these graphs are not on top of the trajectories generated by the operator .other graphs show the time evolution of the dmp parameters .the two primitives created in the previous sections are defined as trajectories between certain start and end points .however , the end point of the first trajectory does not necessarily matches with the starting point of the second trajectory .we use partial contraction theory to force the first trajectory to converge to the second one .however , since we want to use the contraction as a transition between two trajectories , coupling is enabled towards the end of first primitive .figure [ fig : combined ] shows how the two trajectories evolve in time .in the first primitive , the goal positions of and angles are changed to and respectively , where original angles are and . in the secondprimitive , the goal position of the angle is changed from to .tracking performance of the helicopter is shown in figure [ tracking ] .it is seen that the helicopter followed the desired ( and ) angles almost perfectly . however , the trajectory of the roll angle is a bit different than the desired since we control two parameters ( and ) and the goal positions of the dmps are different .but we should highlight the fact that two roll trajectories follow the same pattern . in figure ,the last part of the roll trajectory manifests an oscillation which can be prevented by roll control , since the other parameters are almost constant .the tracking performance can further be improved by applying discrete nonlinear observers to get better velocity and acceleration values .figure [ fig : snapshots ] shows snapshots of the maneuver .dmp algorithm can be improved by replacing the first system with the equations shown below : which is equivalent of by introducing two first - order filters , we guarantee the stability of the system against time varying parameters like or . since the system is linear without the -function ( eq.[f_function ] ) , we achieve learning and modulation properties of dmp using the in either eq.([f1 ] ) or eq.([f2 ] ) . for further applications, we will use this model to generate primitives for time - varying goal points .experiments on frog s spinal cord suggest that movement primitives can be generated from linear combinations of vectorial force fields which lead the limb of a frog to the virtual equilibrium points . in , it is also pointed out that vectorial summation of two force fields with different equilibrium points generate a new force field whose equilibrium point is at intermediate location of the original equilibrium points . in this perspective , we will use two methods to generate new primitives .consider a system where and represent the first and the second primitive respectively . from partial contraction theory, we say that and converge together exponentially , if is contracting .since dmps are already contracting , we achieve synchronization using contracting inputs . 
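the superposition argument invoked here is easy to check numerically : for the same linear transformation system , the response to a weighted sum of two forcing terms equals the same weighted sum of the individual responses , so combining the learned weights of two primitives combines the primitives themselves . the sketch below , with illustrative gains and forcing signals , verifies this to machine precision .

import numpy as np

dt = 0.001
t = np.arange(0.0, 10.0, dt)
a, b = 0.7, 0.3                                              # illustrative combination weights

def simulate(forcing):
    y, z, out = 0.0, 0.0, []
    for f in forcing:
        z += dt * (25.0 * (6.25 * (0.0 - y) - z) + f)        # same linear attractor for every run
        y += dt * z
        out.append(y)
    return np.array(out)

y_sin = simulate(np.sin(2.0 * t))
y_cos = simulate(np.cos(2.0 * t))
y_mix = simulate(a * np.sin(2.0 * t) + b * np.cos(2.0 * t))

err = np.abs(y_mix - (a * y_sin + b * y_cos)).max()
print("max deviation from the superposed response:", err)   # numerical round-off only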
in fig.[synchronized rcp ] ( top ) , new primitive is a linear combination of sine and cosine primitives .also in the same figure , coupling forces accounts for oscillations before synchronization happens . in dmps ,as it was shown before , system behaves linearly and superposition applies .therefore , in the -function , linear combination of the weights from different primitives produce linear combination of primitives . for rhythmic dmps , as an example , we combine the weights of the sine and cosine primitives ( ) to generate a new primitive ( see fig .[ synchronized rcp ] ( bottom ) ) .however for a regular dmp , we can not achieve the desired trajectories although we have linearity which is because input `` '' point is not compatible with the weights changing with respect to the couplings .for this reason , we will simply modify the equations in our later research .in this paper , we use a novel approach , inspired by biological experiments and humanoid robotics , which uses control primitives to imitate the data taken from human - performed obstacle avoidance maneuver . in our model , dmp computes the trajectory dynamics so that we can generate complex primitive trajectories for given different start and end points , while one - way coupling ensures smooth transitions between primitives at the equilibrium points .we demonstrate our algorithm with an experiment . we generate a complex , aggressive maneuver , which our helicopter could follow within a given error bound with a desired speed .future research will be conducted on different combinations of primitives using partial contraction theory .we expect these techniques to be particularly useful when the system dynamic models are very coarse , as e.g. in the case of flapping wing systems and new bio - inspired underwater vehicles .we extend our warm thanks to prof .e. feron and his phd .student s. bayraktar for the opportunity to use their quanser helicopter .ng , daishi harada and shankar sastry . autonomous helicopter flight via reinforcement learning . in _ neural information processing systems_ 16 , 2004 a. coates , p. abbeel , a.y .ng . learning for control from multiple demonstrations . in _ proceedings of the twenty - fifth international conference on machine learning _ , 2008 .j. bagnell and j. schneider .autonomous helicopter control using reinforcement learning policy search methods . in_ international conf .robotics and automation_. ieee , 2001 .s. schaal , j. peters , j. nakanishi , a. ijspeert .control , planning , learning , and imitation with dynamic movement primitives ._ ieee / rsj international conference on intelligent robots and systems _ , 2003 .v. gavrilets , i. martinos , b. mettler , e. feron .control logic for automated aerobatic flight of a miniature helicopter . _ aiaa guidance , navigation , and control conference and exhibit , monterey , california _ , 2002 .shim , h.j .kim , s. sastry .control system design for rotorcraft - based unmanned aerial vehicles using time - domain system identification ._ proceedings of the 2000 ieee international conference on control applications _ , 2000 .b. mettler , e. bachelder .combining on- and offline optimization techniques for efficient autonomous vehicle s trajectory planning ._ aiaa guidance , navigation , and control conference and exhibit _ , 2005 .j. mezger , w. 
ilg, m.a. trajectory synthesis by hierarchical spatio-temporal correspondence: comparison of different methods. _ proceedings of the 2nd symposium on applied perception in graphics and visualization _, 2005. f. a. mussa-ivaldi. nonlinear force fields: a distributed system of control primitives for representing and learning movements. _ ieee international symposium on computational intelligence in robotics and automation _, 1997.
we introduce a simple framework for learning aggressive maneuvers in flight control of uavs. inspired by biological experiments and humanoid robotics, dynamic movement primitives are analyzed and extended using nonlinear contraction theory. accordingly, primitives of an observed movement are stably combined and concatenated. we demonstrate our results experimentally on the quanser helicopter, on which we first imitate aggressive maneuvers and then use them as primitives to achieve new maneuvers that can fly over an obstacle.
recently bernal and snchez proved that causal simplicity , usually defined by imposing the two properties , ( a ) distinction and ( b ) for all the sets and are closed ( this property is equivalent to , see ( * ? ? ? * sect . 3.10 ) ) , can actually be improved by replacing ( a ) with the weaker requirement of causality . in this worki give a result which goes in the same direction of optimizing the definitions and results underlying the causal hierarchy of the spacetimes .causal continuity is usually defined by imposing the conditions ( i ) distinction and ( ii ) reflectivity .the distinction condition was defined , quite naturally , by hawking and sachs as the imposition of both future and past distinction . at the time , kronheimer and penrose had already defined the past , future and the weak distinction properties as follows : a spacetime is future distinguishing if ; past distinguishing if ; and weakly distinguishing if `` '' .clearly , future ( past ) distinction implies weak distinction and there are examples of spacetimes which are weakly distinguishing but neither future nor past distinguishing ( see figure [ wdis](b ) ) , thus weak distinction is a strictly weaker property than future or past distinction . nevertheless , in this worki am going to prove ( corollary [ cor ] ) that condition ( i ) defining causal continuity can be replaced with ( i ) _ feeble distinction _ , a property which , as i will show , is even weaker than weak distinction .this result comes from a interesting lemma which mixes future and past properties ( otherwise usually found separated in other theorems ) , namely feeble distinction and future ( past ) reflectivity implies past ( resp .future ) distinction ( theorem [ psr ] ) .i denote with a spacetime ( connected , time - oriented lorentzian manifold ) , of arbitrary dimension and signature .on the usual product topology is defined .the subset symbol is reflexive , i.e. . the closure of the causal future on is denoted , that is , . for other notations concerning causal setsthe reader is referred to .is antisymmetric while and are not .note that both future and past reflectivity fail to hold as one should expect from theorem [ psr ] ., width=302 ]since the property of weak distinction has been only marginally used in causality theory i devote a few pages to its study , in particular i develop some equivalent characterizations .the reader is assumed to be familiar with the approach to causal relations as subsets of ( see and ) .recall that the relations on are reflexive and transitive .moreover , the spacetime is future ( past ) distinguishing iff ( resp . ) is antisymmetric .define so that recall that iff the spacetime is future reflecting and iff the spacetime is past reflecting ( for other equivalent characterizations of reflectivity see ) .since , it is iff and .this observation reads a spacetime is reflecting iff .the antisymmetry condition for is equivalent to weak distinction , indeed it holds .[ mos ] the following conditions on a spacetime are equivalent 1 .[ 1 ] and imply .2 . [ 2 ] is antisymmetric .3 . [ 4 ] the map defined by is injective .[ 5 ] the map defined by is injective .[ 6 ] the map defined by is injective .[ 1 ] [ 2 ] .assume is not antisymmetric then there are and , such that and which reads `` and and and '' . and implies while and implies , thus [ 1 ] does not hold .[ 2 ] [ 1 ] .assume [ 1 ] does not hold .there are such that and , thus , , and analogously with the roles of and exchanged .thus and , i.e. is not antisymmetric . 
[ 4 ] [ 1 ] . indeed if [ 1 ] does not hold there are , such that and , thus , and hence [ 4 ] does not hold , a contradiction .[ 1 ] [ 4 ] .first , [ 1 ] implies that is chronological indeed , the existence of with would imply and which contradicts 1 . since is chronological for every event the sets and are disjoint .assume [ 4 ] does not hold then there are , such that , but given it is or .but the former possibility can not hold because it implies thus must belong to or both cases implying a violation of chronology .thus , and changing the roles of and , , i.e. . changing the roles of past and future which contradicts [ 1 ] , hence , by contradiction , [ 4 ] must hold . [ 2 ] ( [ 5 ] and [ 6 ] ) .it follows from theorem 2.3(c ) of .a spacetime is weakly distinguishing if it satisfies the equivalent properties of lemma [ mos ] .note that is transitive and reflexive because and are transitive and reflexive , moreover it is trivial to prove that if or is antisymmetric then is antisymmetric .this observation gives another proof of the well known , already mentioned in the introduction , result that if is past or future distinguishing then it is weakly distinguishing .it is easy to prove that past or future distinction at implies weak distinction at , however , it is a non trivial matter to find a counterexample of the converse .an example of a weakly distinguishing spacetime in which there is a event at which the spacetime is neither future nor past distinguishing can be obtained from the spacetime of figure [ feeble ] by removing the point .the possibility of expressing weak distinction through the injectivity of the map , is a consequence of the reflexivity and transitivity of ( see theorem 2.3(c ) of ) .actually , other causality properties such as strong causality can be characterized in terms of the injectivity of a suitable causal set function although , as far as i know , strong causality can not be obtained as an antisymmetry condition for a suitable reflexive and transitive causal relation .the idea of expressing the causality conditions as injectivity conditions on causal set functions goes back to i. rcz . weak distinction and the causal structures of kronheimer and penrose intimately related as the next two lemmas prove . is a left -ideal , that is , and , analogously is a right -ideal , that is , and . is a -ideal , and the triple is a causal structure in the sense of kronheimer and penrose iff the spacetime is weakly distinguishing .it follows trivially from the definitions and from the fact that is open .let be a set of relations labeled by an index .note that if for every then , thus there is the largest set with respect to which is a left ideal .this largest set necessarily contains because .analogous considerations hold for the property and `` and '' .the set is the largest set which satisfies , analogously , is the largest set which satisfies and is the largest set which satisfies both and . in particular if is a causal structure in the sense of kronheimer and penrose then .we already know that .assume there is , , such that .take then for every such that it is , but can be characterized as ( see the proof of lemma 4.2 ) , thus a contradiction .analogously , is the largest set which satisfies .we already know that and . 
if is not the largest set which satisfies this property then there is , such that and .the pair ca nt belong to both and , assume without loss of generality , then is larger than and satisfies a contradiction .future distinction is equivalent to the antisymmetry of however , it is also equivalent to an apparently weaker requirement , namely and .it is natural to ask whether weak distinction can be expressed as : and .as we shall see , the answer is negative and the property defines a new level in the causal ladder which stays between weak distinction and causality .[ lhf ] the following conditions on a spacetime are equivalent 1 . , and imply .2 . and .+ 1 2 .assume 2 does not hold then there are and , such that which reads `` and and '' . and implies while and implies , thus [ 1 ] does not hold. 2 1 .assume 1 does not hold .there are such that and , thus , .thus and , but , a contradiction .a spacetime is said to be _ feebly distinguishing _ if it satisfies one of the equivalent properties of lemma [ lhf ] . in short, a spacetime is feebly distinguishing if there is no pair of causally related events with the same chronological pasts and futures .if a spacetime is weakly distinguishing then it is feebly distinguishing .if a spacetime if feebly distinguishing then it is causal .it follows trivially from the fact that .it is easy to check that feeble distinction differs from causality , see for instance the spacetime of figure [ wdis](a ) .it is instead a non trivial matter to establish that feeble distinction differs from weak distinction .figure [ feeble ] gives an example of feebly distinguishing non - weakly distinguishing spacetime . and share the same chronological past and future and form the only pair of events with this property .they are not causally related thanks to the removed sets , thus the spacetime is feebly distinguishing.,width=377 ] since the construction of the spacetime is quite involved i offer the next explanation .the starting point is a spacetime of coordinates , ] and $ ] on the axis have the same chronological future , and the points on the segments and have the same chronological past .another method , perhaps simpler , to define the filter is as follows .i describe it for the filter denoted in the figure , the other cases being analogous .take a sequence , , and define .finally , define .this filter has the same causal effect of the one described using the taylor expansion , however , note that it has a characteristic flower shape and not an elliptic one as in the figure .finally , i give the proof to the results mentioned in the introduction .[ psr ] if the spacetime is future ( past ) reflecting then ( resp . ) .in particular , feeble distinction and future ( past ) reflectivity imply past ( resp .future ) distinction . if the spacetime is future reflecting then , but thus .thus under future reflectivity , and is equivalent to and which by feeble distinction implies .the proof in the other case is analogous .[ cor ] a spacetime is causally continuous iff it is feebly distinguishing and reflecting .the only if part is trivial as it follows from the usual definition of causal continuity as a spacetime which is distinguishing and reflecting . 
for the if part note that by theorem [ psr ] , feeble distinction and past reflectivity imply future distinction .moreover , feeble distinction and future reflectivity imply past distinction , thus feeble distinction and reflectivity imply distinction .thus the spacetime is distinguishing and using again the assumed reflectivity the causal continuity follows .feeble distinction implies causality , however , in the definition of causal continuity causality can not replace feeble distinction , indeed it is quite easy to construct an example of spacetime which is causal , reflecting and non - feebly distinguishing ( and hence non - causally continuous ) , see figure [ wdis](a ) .actually , this example is also non - total imprisoning .i shall prove in a related work that feeble distinction implies non - total imprisonment which implies causality .however , in the definition of causal continuity feeble distinction can not even be relaxed to non - total imprisoning as the example of figure [ wdis](a ) again proves .the property of weak distinction has been studied showing that it is equivalent to the antisymmetry of the causal relation defined by eq .( [ nis ] ) .the set enters also in an alternative definition of reflectivity , namely the condition .between weak distinction and causality i defined another level , called _feeble distinction_. examples have been provided which show that feeble distinction indeed differs from weak distinction and causality ( actually it differs from non - total imprisonment ) .next a basic step has been the proof that feeble distinction and future ( past ) reflectivity implies past ( resp .future ) distinction , a curious statement that mixes future and past properties . using it, it has been finally shown that in the definition of causal continuity it is possible to replace the distinction property with feeble distinction .some known examples prevent the possibility of weakening the feeble distinction property to the level which stays immediately below it in the causal ladder .since the causal ladder is not fixed , and new levels can always be found , there is some natural uncertainty on what this optimality could mean . in any casei will show in a related work that the non - total imprisonment property stays between feeble distinction and causality , and figure [ wdis](a ) proves that in the definition of causal continuity , feeble distinction can not be replaced by non - total imprisonment . in this sense, the definition of causal continuity given in this work is optimal .
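since most displayed formulas were lost in this copy, a compact restatement of the properties discussed above may be useful; the notation I^{\pm}(p) for chronological futures and pasts and \le for the causal relation is the standard one and is assumed here.

```latex
% compact restatement of the distinction properties (standard notation assumed)
\begin{align*}
\text{future distinguishing:} \quad & I^{+}(p)=I^{+}(q) \;\Rightarrow\; p=q,\\
\text{past distinguishing:}   \quad & I^{-}(p)=I^{-}(q) \;\Rightarrow\; p=q,\\
\text{weakly distinguishing:} \quad & I^{+}(p)=I^{+}(q)\ \text{and}\ I^{-}(p)=I^{-}(q) \;\Rightarrow\; p=q,\\
\text{feebly distinguishing:} \quad & p\le q,\ I^{+}(p)=I^{+}(q)\ \text{and}\ I^{-}(p)=I^{-}(q) \;\Rightarrow\; p=q.
\end{align*}
% implication chain: distinction => weak distinction => feeble distinction
%                    => non-total imprisonment => causality,
% and the results above give: feeble distinction + future (past) reflectivity
% => past (resp. future) distinction, hence
% causally continuous <=> feebly distinguishing and reflecting.
```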
causal continuity is usually defined by imposing the conditions (i) distinction and (ii) reflectivity. it is proved here that a new causality property which stays between weak distinction and causality, called _ feeble distinction _, can actually replace distinction in the definition of causal continuity. an intermediate result shows that feeble distinction and future (past) reflectivity imply past (resp. future) distinction. some new characterizations of weak distinction and reflectivity are given.
a quantity of central interest in thermodynamics and statistical physics is the ( helmholtz ) free - energy , as it determines the equilibrium properties of the system under consideration . in practical applications , e.g. drug design, molecular association , thermodynamic stability , and binding affinity , it is usually sufficient to know free - energy differences . as recent progress in statistical physicshas shown , free - energy differences , which refer to equilibrium , can be determined via non - equilibrium processes . typically , free - energy differences are beyond the scope of analytic computations and one needs to measure them experimentally or compute them numerically .highly efficient methods have been developed in order to estimate free - energy differences precisely , including thermodynamic integration , free - energy perturbation , umbrella sampling , adiabatic switching , dynamic methods , asymptotics of work distributions , optimal protocols , targeted and escorted free - energy perturbation . a powerful and frequently used method for free - energy determination is two - sided estimation , i.e. bennett s acceptance ratio method , which employs a sample of work values of a driven nonequilibrium process together with a sample of work values of the time - reversed process .the performance of two - sided free - energy estimation depends on the ratio of the number of forward and reverse work values used .think of an experimenter who wishes to estimate the free - energy difference with bennett s acceptance ratio method and has the possibility to generate forward as well as reverse work values .the capabilities of the experiment give rise to an obvious question : if the total amount of draws is intended to be , which is the optimal choice of partitioning into the numbers of forward and of reverse work values , or equivalently , what is the optimal choice of the ratio ?the problem is to determine the value of that minimizes the ( asymptotic ) mean square error of bennett s estimator when is held constant . while known since bennett , the optimal ratio is underutilized in the literature . bennett himself proposed to use a suboptimal equal time strategy , instead , because his estimator for the optimal ratio converges too slowly in order to be practicable .even questions as fundamental as the existence and uniqueness are unanswered in the literature . moreover , it is not always clear a priori whether two - sided free - energy estimation is better than one - sided exponential work averaging .for instance , shirts et al . have presented a physical example where it is optimal to draw work values from only one direction .the paper is organized as follows : in secs .[ sec:2 ] and [ sec:3 ] we rederive two - sided free - energy estimation and the optimal ratio .we also remind that two - sided estimation comprises one - sided exponential work averaging as limiting cases for , a result that is also true for the mean square errors of the corresponding estimators .the central result is stated in sec .[ sec:4 ] : the asymptotic mean square error of two - sided estimation is convex in the fraction of forward work values used .this fundamental characteristic immediately implies that the optimal ratio exists and is unique .moreover , it explains the generic superiority of two - sided estimation if compared with one - sided , as found in many applications . 
to overcome the slow convergence of bennett s estimator of the optimal ratio , which is based on estimating second moments , in sec .[ sec:5 ] we transform the problem into another form such that the corresponding estimator is entirely based on first moments , which enhances the convergence enormously . as an application , in sec .[ sec:7 ] we present a dynamic strategy of sampling forward and reverse work values that maximizes the efficiency of two - sided free - energy estimation ._ given _ a pair of samples of forward and reverse work values drawn from the probability densities and of forward and reverse work values and provided the latter are related to each other via the fluctuation theorem , bennett s acceptance ratio method is known to give the optimal estimate of the free - energy difference in the limit of large sample sizes . throughout the paper , and understood to be measured in units of the thermal energy .the normalized probability densities and are assumed to have the same support , and we choose the following sign convention : and .( color online ) the overlap density bridges the densities and of forward and reverse work values , respectively . is the fraction of forward work values , here schematically shown for , , and .the accuracy of two - sided free - energy estimates depends on how good is sampled when drawing from and . ]now define a normalized density with , where ] , and thus bridges between and , see fig .[ fig:1 ] . in the limit , converges to the forward work density , and conversely for it converges to the reverse density . as a consequence of the inequality of the harmonic and arithmetic mean , ^{-1 } \leq { \alpha}{p_{1}}+{\beta}{p_{0}} ] .except for and , the equality holds if and only if . using the fluctuation theorem , can be written as an average in and , where the angular brackets with subscript ] .different values of result in different estimates for .choosing , the estimate coincides with bennett s optimal estimate , which defines the two - sided estimate with least asymptotic mean square error for a given value , or equivalently , _ for a given ratio _ .we denote the optimal two - sided estimate , i.e. the solution of eq . under the constraint , by andsimply refer to it as the two - sided estimate .note that the optimal estimator can be written in the familiar form in the limit the two - sided estimate reduces to the one - sided forward estimate , , and conversely .thus the one - sided estimates are the optimal estimates if we have given draws from only one of the densities or . a characteristic quantity to express the performance of the estimate is the mean square error , which depends on the total sample size and the fraction .here , the average is understood to be an ensemble average in the value distribution of the estimate for fixed and . in the limit of large and , the asymptotic mean square error ( which then equals the variance )can be written provided the r.h.s .exists , which is guaranteed for any , the -dependence of is simply given by the usual -factor , whereas the -dependence is determined by the function given in eq . .note that if a two - sided estimate is calculated , then essentially the normalizing constant is estimated from two sides , and , cf .eqs . and .with an estimate we therefore always have an estimate of the mean square error at hand . 
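as an illustration of the two-sided estimator discussed above, the following sketch solves bennett's self-consistent acceptance-ratio equation numerically for given samples of forward and reverse work values (in units of the thermal energy, as above). the gaussian test densities in the usage example are chosen only because they satisfy the fluctuation theorem exactly; they are not taken from the text.

```python
import numpy as np

def bar_estimate(w_f, w_r, lo=-50.0, hi=50.0, tol=1e-10):
    """Bennett acceptance-ratio estimate of Delta F (in units of kT) from
    forward work values w_f and reverse work values w_r, obtained by
    bisection of the standard self-consistency equation."""
    M = np.log(len(w_f) / len(w_r))

    def imbalance(dF):
        # overflow-safe 1/(1+exp(.)) written via logaddexp
        lhs = np.sum(np.exp(-np.logaddexp(0.0, M + w_f - dF)))
        rhs = np.sum(np.exp(-np.logaddexp(0.0, -M + w_r + dF)))
        return lhs - rhs            # monotonically increasing in dF

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if imbalance(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# usage with synthetic Gaussian work distributions that satisfy the
# fluctuation theorem exactly: W_f ~ N(dF + s^2/2, s^2), W_r ~ N(-dF + s^2/2, s^2)
rng = np.random.default_rng(1)
dF_true, s = 2.0, 1.5
w_f = rng.normal(dF_true + s ** 2 / 2, s, size=4000)
w_r = rng.normal(-dF_true + s ** 2 / 2, s, size=1000)
print(bar_estimate(w_f, w_r))       # close to dF_true = 2.0
```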
however , the reliability of the latter naturally depends on the degree of convergence of the estimate .the convergence of the two - sided estimate can be checked with the convergence measure introduced in ref . . in the limits and , respectively , the asymptotic mean square error of the two - sided estimator converges to the asymptotic mean square error of the appropriate one - sided estimator , and where denotes the variance operator with respect to the density , i.e. for an arbitrary function and ] minimizes the mean square error when the total sample size , , is held fixed ?let be the rescaled asymptotic mean square error given by which is a function of only .assuming , a necessary condition for a minimum of is that the derivative of vanishes at . before calculating explicitly ,it is beneficial to rewrite by using the identity subtracting from eq . and recalling the definition of , one obtains { u_{{\alpha}}}^2,\end{aligned}\ ] ] where the functions are defined as and describe the relative fluctuations of the quantities that are averaged in the two - sided estimation of , cf .eq . . with the use of formula, can be written and the derivative yields the derivatives of the -functions involve the first two derivatives of , which will thus be computed first : and from this equation it is clear that is convex in , , with a unique minimum in ( as ) .we can rewrite the -functions with and as follows : differentiating these expressions gives and are monotonically increasing and decreasing , respectively .this immediately follows from writing the term occurring in the brackets of eqs . as a variance in the density , which is thus positive . as a consequence of eq . , the relation \end{aligned}\ ] ] holds and reduces to the derivatives of the -functions do not contribute to due to the fact that the special form of the two - sided estimator originates from minimizing the asymptotic mean square error , cf .the necessary condition for a local minimum of at , , now reads where is introduced . using eqs . and , the condition results in this means , the optimal ratio is such that the variances of the random functions which are averaged in the two - sided estimation are equal .however , the existence of a solution of is not guaranteed in general . writing eq . in the form the equation from becoming a tautology .the asymptotic mean square error is convex in . in order to prove the convexity, we introduce the operator which is defined for an arbitrary function by is positive semidefinite , i.e. for and , the equality holds if and only if .let , ] ) there .in situations of practical interest the optimal ratio is not available _ a priori_. thus , we are going to estimate the optimal ratio .there exist estimators of the optimal ratio since bennett .in addition we have just proven that the optimal ratio exists and is unique .however there is still one obstacle to overcome .yet , all expressions for estimating the optimal ratio are based on second moments , see e.g. eq . .due to convergence issues , it is not practicable to base any estimator on expressions that involve second moments. the estimator would converge far too slowly .for this reason , we transform the problem into a form that employs first moments , only .assume we have given and work values in forward and reverse direction , respectively , and want to estimate , with .according to eq .we can estimate the overlap measure by using draws from the forward direction , where equals and for the best available estimate of is inserted , i.e. 
the two - sided estimate based on the work values .similarly , we can estimate the overlap measure by using draws from the reverse direction , since in general draws from both directions are available , it is reasonable to take an arithmetic mean of both estimates where the weighting is chosen such that the better estimate , or , contributes stronger : with increasing the estimate becomes more reliable , as is the normalizing constant of the bridging density , eq . , and ; and conversely for decreasing .from the estimate of the overlap measure we can estimate the rescaled mean square error by for all , a result that is entirely based on first moments .the infimum of finally results in an estimate of the optimal choice of , when searching for the infimum , we also take into account which follow from a series expansion of eq . in at and , respectively .the costs of measuring a work value in forward direction may differ from the costs of measuring a work value in reverse direction .the influence of costs on the optimal ratio of sample sizes is investigated here .different costs can be due to a direction dependent effort of experimental or computational measurement of work ( unfolding a rna may be much easier than folding it ) .we assume the work values to be uncorrelated , which is essential for the validity of the theory presented in this paper .thus , a source of nonequal costs , which arises especially when work values are obtained via computer simulations , is the difference in the strength of correlations of consecutive monte - carlo steps in forward and reverse direction . to achieve uncorrelated draws , the `` correlation - lengths '' or `` correlation - times '' have to be determined within the simulation , too .however , this is advisable in any case of two - sided estimation , independent of the sampling strategy .let and be the costs of drawing a single forward and reverse work value , respectively .our goal is to minimize the mean square error while keeping the total costs constant . keeping constant results in which in turn yields if a local minimum exists , it results from which leads to a result bennett was already aware of .however , based on second moments , it was not possible to estimate the optimal ratio accurately and reliably .hence , bennett proposed to use a suboptimal _ equal time strategy _ or _equal cost strategy _ , which spends an equal amount of expenses to both directions , i.e. or where is the equal cost choice for .this choice is motivated by the following result \end{aligned}\ ] ] which states that the asymptotic mean square error of the equal cost strategy is at most sub - optimal by a factor of .note however that the equal cost strategy can be far more sub - optimal if the asymptotic limit of large sample sizes is not reached .since we can base the estimator for the optimal ratio on first moments , see sec .[ sec:5 ] , we propose a _dynamic strategy _ that performs better than the equal cost strategy .the infimum of results in the estimate of the optimal choice of , we remark that opposed to , is not necessarily convex .however , a global minimum clearly exists and can be estimated .suppose we want to estimate the free - energy difference with the acceptance ratio method , but have a limit on the total amount of expenses that can be spend for measurements of work . in order to maximize the efficiency ,the measurements are to be performed such that finally equals the optimal fraction of forward measurements .the dynamic strategy is as follows : 1 . 
in absence of preknowledge on , we start with bennetts equal cost strategy as an initial guess of .2 . after drawing a small number of work valueswe make preliminary estimates of the free - energy difference , the mean square error , and the optimal fraction .3 . depending on whether the estimated rescaled mean square error is convex , which is a necessary condition for convergence , our algorithm updates the estimate of .further work values are drawn such that dynamically follows , while is updated repeatedly .there is no need to update after each individual draw . splitting the total costs into a sequence , not necessarily equidistant, we can predefine when and how often an update in is made .namely , this is done whenever the actually spent costs reach the next value of the sequence .the dynamic strategy can be cast into an algorithm .set the initial values , . in the -th step of the iteration , ,determine with where means rounding to the next lower integer .then , additional forward and additional reverse work values are drawn . using the entire present samples , an estimate of calculated according to eq . .with the free - energy estimate at hand , is calculated for all values of $ ] via eqs . and, discretized , say in steps .if is convex , we update the recent estimate of to via eqs . and . otherwise , if is not convex , the corresponding estimate of is not yet reliable and we keep the recent value , .increasing by one , we iteratively continue with eq .until we finally obtain which is the optimal estimate of the free - energy difference after having spend all costs .note that an update in may result in negative values of or .should happen to be negative , we set and we proceed analogously , if happens to be negative .the optimal fraction depends on the cost ratio , i.e. the algorithm needs to know the costs and . however , the costs are not always known in advance and may also vary over time .think of a long time experiment which is subject to currency changes , inflation , terms of trade , innovations , and so on . of advantageis that the dynamic sampling strategy is capable of incorporating varying costs . in each iteration step of the algorithm one just inserts the actual costs .if desired , the breakpoints may also be adapted to the actual costs .should the costs initially be unknown ( e.g. the `` correlation - length '' of a monte - carlo simulation needs to be determined within the simulation first ) one may use any reasonable guess until the costs are known .for illustration of results we choose exponential work distributions , . according to the fluctuation theorem we have and .the main figure displays the exponential work densities ( thick line ) and ( thin line ) for the choice of and , according to the fluctuation theorem , .the inset displays the corresponding boltzmann distributions ( thick ) and ( thin ) both for . here, is set equal to arbitrarily , hence .the free - energy difference is . ]exponential work densities arise in a natural way in the context of a two - dimensional harmonic oscillator with boltzmann distribution , where is a normalizing constant ( partition function ) and . 
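the dynamic strategy enumerated above can be sketched as follows, reusing bar_estimate from the previous sketch. since the displayed formulas are missing from this copy, the first-moment overlap estimator, the assumed form of the rescaled error X(alpha), the grid, and the batch schedule written here are an illustrative reconstruction of the procedure described in the text rather than the authors' code.

```python
import numpy as np

def overlap_estimate(alpha, w_f, w_r, dF):
    """First-moment estimate of the overlap measure U_alpha, mixing the
    forward- and reverse-side estimates with the actual sample fractions
    (illustrative reconstruction of the estimator described above)."""
    beta = 1.0 - alpha
    u_fwd = np.mean(1.0 / (alpha * np.exp(w_f - dF) + beta))
    u_rev = np.mean(1.0 / (alpha + beta * np.exp(w_r + dF)))
    n_f, n_r = len(w_f), len(w_r)
    return (n_f * u_fwd + n_r * u_rev) / (n_f + n_r)

def x_curve(w_f, w_r, dF, grid):
    """Rescaled mean-square-error curve X(alpha) on a grid (assumed form)."""
    xs = []
    for a in grid:
        u = overlap_estimate(a, w_f, w_r, dF)
        xs.append((1.0 / u - 1.0) / (a * (1.0 - a)))
    return np.array(xs)

def dynamic_strategy(draw_f, draw_r, c_f, c_r, total_cost, batch_cost):
    """Iteratively re-estimate the optimal forward fraction while spending a
    fixed total cost; draw_f / draw_r return arrays of new work values."""
    grid = np.linspace(0.02, 0.98, 49)
    alpha_hat = c_r / (c_f + c_r)            # start from the equal-cost choice
    w_f, w_r, spent = np.array([]), np.array([]), 0.0
    while spent < total_cost:
        n_new = int(batch_cost / (alpha_hat * c_f + (1 - alpha_hat) * c_r))
        n_tot = len(w_f) + len(w_r) + n_new
        k_f = int(np.clip(round(alpha_hat * n_tot - len(w_f)), 0, n_new))
        k_r = n_new - k_f
        w_f = np.append(w_f, draw_f(k_f))
        w_r = np.append(w_r, draw_r(k_r))
        spent += k_f * c_f + k_r * c_r
        dF = bar_estimate(w_f, w_r)          # from the previous sketch
        X = x_curve(w_f, w_r, dF, grid)
        # accept an update of alpha_hat only if the estimated curve is convex,
        # the necessary condition for a reliable estimate used by the algorithm
        if np.all(np.diff(X, 2) > -1e-9):
            alpha_hat = grid[np.argmin(X * (grid * c_f + (1 - grid) * c_r))]
    return bar_estimate(w_f, w_r), alpha_hat
```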
drawing a point from the initial density , defined by setting , and switching the frequency to amounts in the work .the probability density of observing a specific work value is given by the exponential density with .switching the frequency in the reverse direction , , with the point drawn from with , the density of work ( with interchanged sign ) is given by with .the free - energy difference of the states characterized by and is the log - ratio of their normalizing constants , . a plot of the work densities for is enclosed in fig .[ fig:2 ] .now , with regard to free - energy estimation , is it better to use one- or two - sided estimators ?in other words , we want to know whether the global minimum of is on the boundaries of or not . by the convexity of ,the answer is determined by the signs of the derivatives and at the boundaries .the asymptotic mean square errors and of the one - sided estimators are calculated to be for the forward direction and for the reverse direction . for variance of the reverse estimator diverges .note that holds for all , i.e. forward estimation of is always superior if compared to reverse estimation .furthermore , a straightforward calculation gives where , and and for .thus , for the range we have as well as and therefore , i.e. the forward estimator is superior to any two - sided estimator in this range .for we have and , specifying that , i.e. two - sided estimation with an appropriate choice of is optimal . the overlap function and the rescaled asymptotic mean square error for .note that diverges for . ]numerical calculation of the function and subsequent evaluation of allows to find the `` exact '' optimal fraction .examples for and are plotted in fig .[ fig:3 ] . the optimal fraction of forward work values for the two - sided estimation in dependence of the average forward work . for one - sided forward estimator is optimal , i.e. . ]the behavior of as a function of is quite interesting , see fig .[ fig:4 ] .we can interpret this behavior in terms of the boltzmann distributions as follows . without loss of generality ,assume is fixed . increasing then means increasing .the density is fully nested in , cf . the inset of fig. [ fig:2 ] ( remember that ) and converges to a delta - peak at the origin with increasing .this means that by sampling from we can obtain information about the full density quite easily , whereas sampling from provides only poor information about .this explains why holds for small values of .however , with increasing the density becomes so narrow that it becomes difficult to obtain draws from that fall into the main part of .therefore , it is better to add some information from , hence , decreases . increasing further ,the relative number of draws needed from will decrease , as the density converges towards the delta distribution .finally , it will become sufficient to make only _ one _ draw from in order to obtain the full information available .therefore , converges towards in the limit .( color online ) example of a single run using the dynamic strategy : the optimal fraction of forward measurements for the two - sided free - energy estimation is estimated at predetermined values of total sample sizes of forward and reverse work values .subsequently , taking into account the current actual fraction , additional work values are drawn such that we come closer to the estimated . 
]( color online ) displayed are estimated mean square errors in dependence of for different sample sizes .the global minimum of the estimated function determines the estimate of the optimal fraction of forward work measurements . ]( color online ) comparison of a single run of free - energy estimation using the equal cost strategy versus a single run using the dynamic strategy .the errorbars are the square roots of the estimated mean square error . ] in the following the dynamic strategy proposed in sec .[ sec:7 ] is applied .we choose and .the equal cost strategy draws according to which is used as initial value in the dynamic strategy .the results of a single run are presented in figs .[ fig:5][fig:7 ] . starting with ,the estimate of is updated in steps of .the actual forward fractions together with the estimated values of the optimal fraction are shown in fig .[ fig:5 ] .the first three estimates of are rejected , because the estimated function is not yet convex .therefore , remains unchanged at the beginning .afterwards , follows the estimates of and starts to fluctuate about the `` exact '' value of .some estimates of the function corresponding to this run are depicted in fig .[ fig:6 ] . for these estimates is discretized in steps .remarkably , the estimates of that result from these curves are quite accurate even for relatively small . finally , fig .[ fig:7 ] shows the free - energy estimates of the run ( not for all values of ) , compared with those of a single run where the equal cost strategy is used .we find some increase of accuracy when using the dynamic strategy .( color online ) averaged estimates from independent runs with dynamic strategy versus runs with equal cost strategy in dependence of the total cost spend .the cost ratio is , , and .the errorbars represent one standard deviation . here , the initial value of in the dynamic strategy is , while the equal cost strategy draws with .we note that . ]( color online ) displayed are mean square errors of free - energy estimates using the same data as in fig .[ fig:8 ] .in addition , the mean square errors of estimates with constant are included , as well as the asymptotic behavior , eq . .the inset shows that the mean square error of the dynamic strategy approaches the asymptotic optimum , whereas the equal cost strategy is suboptimal .note that for small sample sizes the asymptotic behavior does not represent the actual mean square error . ] in combination with a good a priori choice of the initial value of , the use of the dynamic strategy enables a superior convergence and precision of free - energy estimation , see figs .[ fig:8 ] and [ fig:9 ] . due to insight into some particular system under consideration ,it is not unusual that one has a priori knowledge which results in a better guess for the initial choice of in the dynamic strategy than starting with .for instance , a good initial choice is known when estimating the chemical potential via widom s particle insertion and deletion .namely , it is a priori clear that inserting particles yields much more information then deleting particles , since the phase - space which is accessible to particles in the deletion - system " is effectively contained in the phase - space accessible to the particles in the insertion - system " , cf .e.g. 
.a good a priori initial choice for may be with which the dynamic strategy outperforms any other strategy that the authors are aware of .once reaching the limit of large sample sizes , the dynamic strategy is insensitive to the initial choice of , since the strategy is robust and finds the optimal fraction of forward measurements itself .two - sided free - energy estimation , i.e. the acceptance ratio method , employs samples of forward and reverse work measurements in the determination of free - energy differences in a statistically optimal manner . however , its statistical properties depend strongly on the ratio of work values used . as a central resultwe have proven the convexity of the asymptotic mean square error of two - sided free - energy estimation as a function of the fraction of forward work values used . from herefollows immediately the existence and uniqueness of the optimal fraction which minimizes the asymptotic mean square error .this is of particular interest if we can control the value of , i.e. can make additional measurements of work in either direction . drawing such that we finally reach , the efficiency of two - sided estimation can be enhanced considerably .consequently , we have developed a dynamic sampling strategy which iteratively estimates and makes additional draws or measurements of work .thereby , the convexity of the mean square error enters as a key criterion for the reliability of the estimates . for a simple example which allows to compare with analytic calculations, the dynamic strategy has shown to work perfectly . in the asymptotic limit of large sample sizesthe dynamic strategy is optimal and outperforms any other strategy .nevertheless , in this limit it has to compete with the near optimal equal cost strategy of bennett which also performs very good .it is worth mentioning that even if the latter comes close to the performance of ours , it is worthwhile the effort of using the dynamic strategy , since the underlying algorithm can be easily implemented and does cost quite anything if compared to the effort required for drawing additional work values .most important for experimental and numerical estimation of free - energy differences is the range of small and moderate sample sizes . for this relevant range , it is found that the dynamic strategy performs very good , too .it converges significantly better than the equal cost strategy .in particular , for small and moderate sample sizes it can improve the accuracy of free - energy estimates by half an order of magnitude .we close our considerations by mentioning that the two - sided estimator is typically far superior with respect to one - sided estimators : assume the support and and is symmetric about ; ] then , if the densities are symmetric to each other , , the optimal fraction of forward draws is by symmetry .therefore , if the symmetry is violated not too strongly , the optimum will remain near .continuous deformations of the densities change the optimal fraction continuously .thus , does not reach and , respectively , for some certain strength of asymmetry .it is exceptionally hard to violate the symmetry such that hits the boundary or . in consequence , in almost all situations , the two - sided estimator is superior .we thank andreas engel for a critical reading of the manuscript . c. jarzynski , phys .rev . lett . * 78 * , 2690 ( 1997 ) .g. e. crooks , phys .e * 60 * , 2721 ( 1999 ) .j. g. kirkwood , j. chem .phys . * 3 * , 300 ( 1935 ) .a. gelman and x .-meng , stat . science . 
* 13 * , 163 ( 1998 ) . r. w. zwanzig , j. chem .phys . * 22 * , 1420 ( 1954 ) .g. m. torrie and j. p. valleau , j. comput . phys . * 23 * , 187 ( 1977 ) .chen and q .- m .shao , annals of stat . *25 * , 1563 ( 1997 ) .h. oberhofer and c. dellago , comput .. comm . * 179 * , 41 ( 2008 ) .m. watanabe and w. p. reinhardt , phys .lett . * 65 * , 3301 ( 1990 ) .s. x. sun , j. chem .phys . * 118 * , 5769 ( 2003 ) .f. m. ytreberg and d. m. zuckerman , j. chem . phys . * 120 * , 10876 ( 2004 ) . c. jarzynski , phys .rev e * 73 * , 046105 ( 2006 ) .s. von egan - krieger and a. engel , arxiv:0807.4079 h. then and a. engel , phys .e * 77 * , 041105 ( 2008 ) .meng and s. schilling , j. comput .stat . * 11 * , 552 ( 2002 ) . c. jarzynski , phys .e * 65 * , 046122 ( 2002 ) .h. oberhofer , c. dellago , and s. boresch , phys .e * 75 * , 061106 ( 2007 ) .s. vaikuntanathan and c. jarzynski , phys .lett . * 100 * , 190601 ( 2008 ) . a. m. hahn and h. then ,e * 79 * , 011113 ( 2009 ) .meng and w. h. wong , stat . sin .* 6 * , 831 ( 1996 ) .a. kong , p. mccullagh , x .-meng , d. nicolae , and z. tan , j. r. stat . soc .b * 65 * , 585 ( 2003 ) .m. r. shirts and j. d. chodera , j. chem .* 129 * , 124105 ( 2008 ) .d. m. ceperley , rev .phys . * 67 * , 279 ( 1995 ) .d. frenkel and b. smit _ understanding molecular simulation _( academic press , london , 2002 ) .d. collin , f. ritort , c. jarzynski , s. b. smith , i. tinoco jr and c. bustamante , nature ( london ) * 437 * , 231 ( 2005 ) . c. h. bennett , j. comput .* 22 * , 245 ( 1976 ) .g. e. crooks , phys .e * 61 * , 2361 ( 2000 ) .m. r. shirts and v. s. pande , j. chem .phys . * 122 * , 144107 ( 2005 ) .m. r. shirts , e. bair , g. hooker , and v. s. pande , phys .lett . * 91 * , 140601 ( 2003 ) . j. gore , f. ritort and c. bustamante , proc .sci . * 100 * , 12564 ( 2003 ) .b. widom , j. chem .phys . * 39 * , 2808 ( 1963 ) .
a powerful and well-established tool for free-energy estimation is bennett's acceptance ratio method. central properties of this estimator, which employs samples of work values of a forward and its time-reversed process, are known: for given sets of measured work values, it results in the best estimate of the free-energy difference in the large sample limit. here we state and prove a further characteristic of the acceptance ratio method: the convexity of its mean square error. as a two-sided estimator, it depends on the ratio of the numbers of forward and reverse work values used. convexity of its mean square error immediately implies that there exists a unique optimal ratio for which the error becomes minimal. further, it yields insight into the relation between the acceptance ratio method and estimators based on the jarzynski equation. as an application, we study the performance of a dynamic strategy of sampling forward and reverse work values.
ekert protocol uses entangled states to guarantee the secrecy of a key distributed to two parties ( alice and bob ) .identical measurements performed on a maximally entangled state yield perfect correlation , which can be used to produce a shared key ; while the secrecy of the key can be guaranteed by the violation of bell inequalities measured for non - identical measurements .an unconditional violation of bell inequalities would guarantee that no local ( hidden ) variables exist that an eavesdropper ( eve ) could exploit .it would mean unconditional privacy : eve could have full control of the detectors and the source , more advanced theory and technology , it would still be secure . however , practical implementations of ekert protocol have to be performed with photons , because a key distribution protocol is useful only if alice and bob can be separated by macroscopic distances .photons are also restricted by the use of polarizing beam splitter to a wavelength domain for which standard photon counters have a poor detection efficiency .it means that a rather heavy postselection is required : alice and bob must discard all measurements for which either of them failed to register a click at all .the trouble is that local hidden - variable models that exploit this weakness can reproduce exactly the predictions of quantum mechanics , as soon as the detection efficiency is lower than . in the context of experiments on the foundations of quantum mechanics , the assumption of fair samplingis usually considered reasonable to support a violation of bell inequalities , with the idea that nature is not conspiratory . in quantum key distribution however , eve is expected to conspire .alice and bob should therefore assume that their sample is biased by eve .failure to acknowledge this weakness would leave all freedom to eve to exploit it with a biased sample attack : the statistics on the detected sample would then only have the appearance of secrecy .this weakness should not be underestimated given that a successful quantum hacking has already been successfully implemented experimentally by means of a time - shifting attack .naturally , this issue becomes critical if eve manufactured the detectors , which means that alice and bob should thoroughly check that their detectors are functioning according to specifications .however , we will argue here that even if the detectors owned by alice and bob are genuine photomultipliers or avalanche photodiodes , eve could still in principle force a biased sampling on these detectors by exploiting the thresholds of these detectors .eve would only need to control the source and know the detectors well enough to exploit their thresholds , but she would not need to actually control them. we will thus propose below a fair sampling test to prevent such a biased - sample attack on threshold detectors .standard ekert protocol .alice and bob randomly switch their measurement settings .pairs associated with identical measurement settings ( ) are used to produce a correlated key , while those associated to non - identical measurement settings ( ) are used to check the violation of bell inequality ( and the security of the key).,width=302 ]the motivation and concern for the possibility of a biased - sample attack is that avalanche photodiodes and photomultipliers are fundamentally threshold detectors . 
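for orientation, a genuine polarization-entangled source measured with the settings sketched in the figure would give the following quantum predictions; the concrete angles below are the standard chsh choice and are an assumption, since the text does not list them.

```python
import numpy as np

# Quantum predictions for an ideal polarization-entangled pair measured with
# linear polarizers: correlation E(a, b) = cos(2(a - b)). Identical settings
# give perfectly correlated outcomes (the raw key), while the CHSH combination
# at non-identical settings reaches 2*sqrt(2) > 2 (the security check).
# The concrete angles below are the standard textbook choice, assumed here.
def E(a, b):
    return np.cos(2.0 * (a - b))

a1, a2 = 0.0, np.pi / 4            # Alice's analyzer settings
b1, b2 = np.pi / 8, 3 * np.pi / 8  # Bob's analyzer settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)            # ~ 2.828, i.e. a maximal CHSH violation
print(E(a1, a1))    # 1.0: perfect correlation for identical settings -> key bits
```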
at the input, the energy must be higher than the band gap to trigger an avalanche or a photoelectron ; while at the output , the current must be higher than a discriminator value to be counted as a click .this combined threshold could be exploited by eve to obtain an apparent violation of bell inequalities on the detected sample .we will assume throughout this paper that the source is controlled by eve , that she can produce pulses that split classically according to malus law in polarizing beamsplitters , and that these pulses are sensitive to the threshold in alice s and bob s detector .how eve will effectively produce such pulses is left to her , but it should be stressed that if each pulse contains at most one particle then the biased sampling described here would be ineffective , because the energy seen at a detector would always be the same regardless of the measurement settings .eve could for instance produce pulses with several photons of lower frequencies , possibly using non - linearities in threshold detectors .we will consider here simple models of threshold detectors : ideal threshold detectors , which produce a click with certainty if the energy of the absorbed pulse is greater than a threshold ; and linear threshold detector , which produce a click with a probability increasing linearly with the energy above a threshold ( possibly with a saturation value after which the probability no longer increases ) .the simplest way for eve to obtain an apparent violation of bell inequalities reproducing exactly the predictions of quantum mechanics on the detected sample is to aim at reproducing the asymmetrical detection pattern of a larsson - gisin model .those models are _ ad hoc _ , but it is in fact relatively straightforward for eve to obtain these patterns with classical pulses and threshold detectors . for this purpose , eve sends pairs of correlated pulses with energy and polarization , where is a random variable uniformly distributed on the interval $ ] .then she just needs to make sure that on one side ( say , alice ) the detectors are ideal threshold detectors with , while the other side ( bob ) the detectors are linear threshold detectors , also with . 
with the condition , malus law is cut by the bottom precisely at the intersection of the two channels ( at ) .consequently , alice s always records a click in exactly one channel : channel if , channel otherwise ; whereas on bob s side the probability to get a click in the channel varies with when , and with when in channel .the crucial feature of the resulting detection pattern is that the probability to obtain a click in either channel on bob s side depends explicitly on : it is maximum for , and decreases down to zero for .the sampling is thus unfair , or biased , and leads to an apparent violation of bell inequalities on the detected sample .an eavesdropping strategy would therefore consist in replacing the source of entangled photons with a classical source of pulsed pairs correlated in polarization and designed to meet condition .if eve aims at reproducing the full correlation function as predicted by quantum mechanics , she would have to make sure that at one station ( say , alice ) the threshold detectors react ideally to the pulses , whereas at the other station ( bob ) the threshold detectors react linearly .however , if alice and bob are only measuring a few points of the correlation function ( those giving maximum violation of bell inequality ) , as is done in ekert protocol , eve can lift this constraint and work with identical threshold detectors on both sides ( either linear or ideal ) .alice and bob would then observe a maximal violation of bell inequalities on the subset of detected pairs , and would thus wrongly believe that their key is secure while in fact eve s knowledge would in principle be maximum .in order to prevent eve from using this attack , the obvious solution consists in increasing the efficiency to reach 83% .however , this proves difficult with threshold detectors . decreasing the band gap threshold or increasing the operating temperature does increase the efficiency of the detectors , but only at the cost of higher dark count rates .unless special detectors operating near absolute zero temperature are used , such as transition - edge sensors ( which are too cumbersome and slow to be practical solution to qkd ) , this can be considered a general rule that applies to any detectors , and fundamentally limits their efficiencies .another suggestion is to artificially complete the detected sample by randomly assigning 0 or 1 to non - detected pulses , so that the required efficiency to produce a useful key is lowered to .however , we would like to argue that the drawback of this method is that introducing some random results would be bound to decrease the violation of bell inequality measured on the completed sample , thus preventing a security check of the key unless one does so on the uncompleted sample ( which would again reintroduce the efficiency bound ) .our proposal consists in testing the fairness of the sample by analyzing the output channels of the polarizing beamsplitters , instead of simply feeding detectors with them .we keep the standard design of ekert protocol , with two polarizing beamsplitters on each side ( alice and bob ) projecting the incoming pulses on random bases and , as depicted on fig .[ fig : epr_setup10 ] , but we replace each detectors by a _ polarimeter _ : a polarizing beamsplitter followed by a detector at each output .consider alice s side ( see fig . 
[fig : fstest10 ] ) .we label the polarimeter in channel as , the orientation of its polarizing beam - splitter as , and the detectors in the transmitted and reflected output as and respectively .similarly , the polarimeter in channel is labeled , the orientation of its polarizing beam - splitter , and the detectors in the transmitted and reflected output are and respectively .bob would proceed similarly with two polarimeters labeled and .fair sampling test on alice s side .the detector in channel is replaced by a polarimeter with two detectors and having the same efficiency .the detector in channel is replaced by a polarimeter with two detectors and , also with efficiency .ekert protocol is thus unaltered by our test : polarimeter is equivalent to the detector in channel in fig .[ fig : epr_setup10 ] , with the same efficiency , and polarimeter is equivalent to the detector in channel with efficiency .similar results would be obtained for polarimeter , and for bob s polarimeters.,width=302 ] in case of a genuine source of entangled photons , nothing is changed for ekert protocol , as long as all the detectors have the same efficiency .each polarimeter can then be considered as one detector with quantum efficiency .the polarimeter can be seen as one single detector in channel 1 , in which the orientation has no influence on the result : a photon exiting the polarizing beam - splitter through channel will be detected in either output channel of polarimeter with a probability .similarly , polarimeter can be seen as one single detector in channel , where the orientation plays no role whatsoever , and the same goes for bob s setup .the production of the key and the verification of the violation of bell inequalities is thus unaltered by our fair sampling test setup in case of a genuine source of entangled photons , because the additional measurement settings , , and controlled by alice and bob have no influence on the measurement results in case of a genuine source of entangled photons .however , they have a strong influence in the case of a biased - sample attack by eve . let us consider the simpler case of ideal threshold detectors mentioned above. by malus law , the energy of the pulse reaching alice s detector is starting from a uniform distribution of the polarization of pulses on the circle , we write that , so that the probability to get an energy between and in the channel is given by where is the maximum energy reaching the detector ( by malus law ) .the probability to obtain a click in an ideal threshold detector placed at the transmitted output ( ) of polarimeter is then simply the integral of this density distribution over the energy reaching the detector , from the threshold to : similarly the probability to obtain a click in an ideal threshold detector positioned at the reflected output ( ) of polarimeter is in the case of linear threshold detectors , the analytical results are more complicated since the probability to get a click for an energy is not always equal to 1 , but the principle of calculation remains the same : integrate the product of the probability density distribution by the probability of obtaining a click for a given energy .the analytical results for linear threshold are qualitatively similar to that of ideal threshold detectors . 
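the click probabilities just computed can be checked with a small monte-carlo sketch of the fair-sampling test against the classical-pulse attack; the threshold is taken at half the maximal pulse energy and the pulse polarizations are uniform, as in the model above, while the angle grid and the sample size are arbitrary.

```python
import numpy as np

# Monte-Carlo sketch of the fair-sampling test against the classical-pulse
# attack: pulses of energy E0 with uniformly distributed polarization theta
# split according to Malus' law, and ideal threshold detectors click when the
# received energy exceeds E0/2 (the threshold condition used by the attack).
rng = np.random.default_rng(0)
E0, threshold = 1.0, 0.5
theta = rng.uniform(0.0, np.pi, size=200_000)   # pulse polarizations
a = 0.0                                         # orientation of Alice's PBS

def p_click_polarimeter1(alpha1):
    """Probability that either detector of polarimeter P1 clicks."""
    e_plus = E0 * np.cos(theta - a) ** 2             # energy in Alice's + channel
    e_T = e_plus * np.cos(theta - alpha1) ** 2       # transmitted by P1's PBS
    e_R = e_plus * np.sin(theta - alpha1) ** 2       # reflected by P1's PBS
    return np.mean((e_T > threshold) | (e_R > threshold))

for alpha1 in np.linspace(a, a + np.pi / 2, 5):
    print(round(float(alpha1), 3), p_click_polarimeter1(alpha1))
# the singles rate is maximal at alpha1 = a and drops to zero at
# alpha1 = a + pi/2, whereas a genuine single-photon source would give an
# alpha1-independent rate -- precisely the asymmetry the test looks for.
```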
the results in the case of a source leading to a violation of bell inequalities exactly reproducing the predictions of quantum mechanics are displayed in fig . [ fig : fulltestresult ] : the probability to get a click in one polarimeter depends on the analyzer angle . it is maximum for some values of the angle and reaches zero for others . similar results would be obtained for the other polarimeter , and for bob 's polarimeters .
( caption of fig . [ fig : fulltestresult ] : the probability to get a click in a given detector depends on the analyzer angle . over part of the angular range only one detector of the polarimeter can click , whereas over another part only the other detector can click . by contrast , in the case of a genuine source of entangled photons , the probability to get a click in these channels is governed by malus 's law ( not shown here ) , and the probability to get a click in either channel therefore adds up to a constant independent of the angle . )
this fair sampling test can be implemented very simply on alice 's side by fixing her analyzer setting . the random switching in the ekert protocol ( fig . [ fig : epr_setup10 ] and fig . [ fig : fstest10 ] ) ensures that the two relevant points are both scanned automatically . any significant difference in the number of single counts recorded at these two settings would betray eve 's attempt to bias the sample through a biased - sample attack on the threshold detectors . similarly , bob would choose a fixed setting , and compare the number of singles recorded at his two points . our fair sampling test can be implemented during the production of the key , together with the violation of bell inequality check , so that it seems hard to bypass it without reducing the visibility of the correlation . for instance , increasing the energy of the pulses with respect to the threshold would tend to reduce the dip in the fair sampling test , but it would give rise to double counts and reduce the visibility of the correlation at the same time ( weaker violation of bell inequalities ) . the combination of a bell inequality test with a monitoring of the double counts and our local fair sampling test therefore constitutes a solid scheme against eavesdropping on an e91 protocol using a biased - sample attack . it should also be noted that the use of four detectors on each side can serve other purposes , like shielding alice and bob from a time - shift attack . in principle , similar fair sampling tests could be implemented in other qkd protocols , by replacing the passive detectors in each channel by a device with the same efficiency that would analyze further whichever degree of freedom is used to encode the key , instead of simply feeding detectors with it .
we are grateful to hoi - kwong lo , jan - åke larsson , takashi matsuoka and masanori ohya for useful discussions on quantum key distribution .
a. k. ekert , phys . rev . lett . 67 , 661 ( 1991 ) .
_ et al . _ , rev . mod . phys . 74 , 145 ( 2002 ) .
g. jaeger , _ quantum information _ , springer , new york ( 2007 ) .
v. scarani _ et al . _ , rev . mod . phys . 81 , 1301 ( 2009 ) .
lo and y. zhao , arxiv:0803.2507 ( 2008 ) .
d. stucki _ et al . _ , j. mod . opt . 48 , 1967 ( 2001 ) .
p. pearle , phys . rev . d 2 , 1418 ( 1970 ) .
a. garg and n. d. mermin , phys . rev . d 35 , 3831 ( 1987 ) .
eberhard , phys . rev . a 47 , r747 ( 1993 ) .
larsson , phys . rev . a 57 , 3304 ( 1998 ) .
n. gisin and b. gisin , phys . lett . a 260 , 323 ( 1999 ) .
a. ekert , physics world , article - id 5969474 ( 2009 ) .
y. zhao , c. h. f. fung , b. qi , c. chen , h. k. lo , phys . rev . a 78 , 042333 ( 2008 ) .
knoll , _ radiation detection and measurement _ , wiley & sons ( 1999 ) .
g. adenier , aip conf . proc . 1101 , 8 ( 2009 ) .
resch , j. s. lundeen and a. m. steinberg , phys . rev . a 63 , 020102(r ) ( 2001 ) .
larsson , phys . lett . a 256 , 242 ( 1999 ) .
_ et al . _ , arxiv:0812.4301 ( 2008 ) .
a. aspect , phd thesis no . 2674 , université paris - sud , centre d'orsay ( 1983 ) .
_ et al . _ , arxiv:1002.1237 ( 2010 ) .
we propose a local scheme to enhance the security of quantum key distribution in the ekert protocol ( e91 ) . our proposal is a fair sampling test meant to detect an eavesdropping attempt that would use a biased sample to mimic an apparent violation of bell inequalities . the test is local and non - disruptive : it can be unilaterally performed at any time by either alice or bob during the production of the key , together with the bell inequality test .
inspired by the work of hérau and pravda - starov on the global hypoellipticity of a landau - type operator , we study in this paper the hypoellipticity of a linear model of the spatially inhomogeneous boltzmann equation without angular cutoff , which takes the following form : where the coefficients , are smooth _ real - valued _ functions of the velocity variable with the properties listed below . there exist a number and a constant such that for all we have and where and throughout the paper we use the notation . the notation in ( [ 11051116 ] ) stands for the fourier multiplier of the corresponding symbol , and for two operators and the bracket stands for their commutator , defined by [ a , b ] = ab - ba . [ 110503 ] let be given in ( [ 11051401 ] ) with , satisfying the assumptions ( [ assumption1 ] ) and ( [ assumption2 ] ) , and let be defined in ( [ 11052716 ] ) . then for all we have } f,~ { \left < v\right>}^{\gamma}m_{\varepsilon}^{s } f\bigr)}_{l^2}\right\vert } \lesssim ~ { \varepsilon}{\big\vert{\left < v\right>}^{\gamma}{\left < d_\eta\right>}^{2s}f\big\vert}_{l^2}^2 + c_{{\varepsilon}}{\left({\big\vert{\left < d_\eta\right>}^s { \left < v\right>}^{s+\gamma}f\big\vert}_{l^2}^2 + { \big\vert f\big\vert}_{l^2}^2\right)}. \end{aligned}\ ] ] in order to prove the above results we need some lemmas . [ lemm110527 ] let and be given in ( [ 11052701 ] ) and ( [ 11052716 ] ) . then and , uniformly with respect to and . moreover for any , there exists a constant , depending only on and , such that and . it is just a straightforward verification , since on the support of . the proof is complete . [ lem11050601 ] let be given in ( [ 11052716 ] ) . then for all we have }f,~{\left < v\right>}^{\gamma } m_{\varepsilon}^{s } f\bigr)}_{l^2}\right\vert } \lesssim{ \varepsilon}{\big\vert{\left < v\right>}^{\gamma}{\left < d_\eta\right>}^{2s}f\big\vert}_{l^2}^2 . \end{aligned}\ ] ] observe }= \frac { 1}{2\pi } \big\{\tau+v\cdot\xi,~~\varphi_{\varepsilon}(v,\eta){\left<\eta\right>}^s\big\}^{w}=-\frac { 1}{2\pi}\big ( \xi\cdot\partial_\eta{\left(\varphi_{\varepsilon}{\left<\eta\right>}^s\right)}\big)^{w},\ ] ] where stands for the poisson bracket defined by . thus }f,~{\left < v\right>}^{\gamma } m_{\varepsilon}^s f\bigr)}_{l^2 } = -{1\over { 2\pi}}{\bigl ( m_{\varepsilon}^s { \left < v\right>}^{\gamma}\big ( \xi\cdot\partial_\eta{\left(\varphi_{\varepsilon}{\left<\eta\right>}^s\right)}\big)^{w } f , ~f\bigr)}_{l^2}. \end{aligned}\ ] ] moreover , in view of ( [ 11052702 ] ) and ( [ 11053010 ] ) we have uniformly with respect to and . this implies }f,~{\left < v\right>}^{\gamma } m_{\varepsilon}^s f\bigr)}_{l^2}\right\vert}\lesssim { \varepsilon}{\big\vert{\left < v\right>}^{\gamma } { \left < d_v\right>}^{2s}f\big\vert}_{l^2}^2 , \end{aligned}\ ] ] completing the proof of lemma [ lem11050601 ] .
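the bracket computation used in the proof of lemma [ lem11050601 ] can be spelled out . the following is a short worked version of that step , written under my own assumptions ( not notation taken from the damaged display ) : the bracket is taken in the ( v , η ) variables , with ( τ , ξ ) treated as parameters ( the fourier variables dual to ( t , x ) ) , and with the sign convention
\[
\{p,q\} \;=\; \partial_\eta p\cdot\partial_v q \;-\; \partial_v p\cdot\partial_\eta q .
\]
for \( p=\tau+v\cdot\xi \) and \( q=\varphi_{\varepsilon}(v,\eta)\langle\eta\rangle^{s} \) one has \( \partial_\eta p=0 \) and \( \partial_v p=\xi \) , so
\[
\{\tau+v\cdot\xi,\;\varphi_{\varepsilon}\langle\eta\rangle^{s}\}
\;=\; -\,\xi\cdot\partial_\eta\bigl(\varphi_{\varepsilon}\langle\eta\rangle^{s}\bigr),
\]
which is the expression whose weyl quantization appears on the right hand side of the display above . because \( \tau+v\cdot\xi \) is a polynomial of degree one in \( (v,\eta) \) , the symbolic expansion of the commutator terminates at this first term , so the identity is exact ; the prefactor \( \tfrac{1}{2\pi} \) reflects the fourier normalization used in the paper .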
the rest of this subsection is occupied by the proof of the above proposition . write }={\bigl[i{\left(t+v\cdot \xi\right)},~ m_{\varepsilon}^{s}\bigr]}+a(v){\bigl [ ( -\tilde\triangle_v)^s , ~ m_{\varepsilon}^{s}\bigr ] } + { \bigl[a , ~ m_{\varepsilon}^{s}\bigr ] } ( -\tilde\triangle_v)^s+{\bigl[b,~ m_{\varepsilon}^{s}\bigr]}.\ ] then by ( [ 11051101 ] ) we have }f,~{\left < v\right>}^{\gamma } m_{\varepsilon}^{s } f\bigr)}_{l^2}\right\vert } \lesssim { \varepsilon}{\big\vert{\left < v\right>}^{\gamma}{\left < d_\eta\right>}^{2s}f\big\vert}_{l^2}^2+a_1+a_2+a_3 , \end{aligned}\ ] ] with } f,~{\left < v\right>}^{\gamma } m_{\varepsilon}^{s } f\bigr)}_{l^2}\right\vert},\\ a_2&= & { \left\vert{\bigl({\bigl[a , ~ m_{\varepsilon}^{s}\bigr ] } ( -\tilde\triangle_v)^s f,~{\left < v\right>}^{\gamma } m_{\varepsilon}^{s } f\bigr)}_{l^2}\right\vert},\\ a_3&= & { \left\vert{\bigl({\bigl[b,~ m_{\varepsilon}^{s}\bigr ] } f,~{\left < v\right>}^{\gamma } m_{\varepsilon}^{s } f\bigr)}_{l^2}\right\vert}. \end{aligned}\ ] ] in view of ( [ 11053003 ] ) we see }\in { \rm op } { \left(s{\left({\left < v\right>}^{-1}{\left<\eta\right>}^{3s-1},~{\left\vertdv\right\vert}^2+{\left\vertd\eta\right\vert}^2\right)}\right)},\ ] and thus }\in { \rm op } { \left(s{\left({\left < v\right>}^{s+\gamma}{\left<\eta\right>}^{2s},~{\left\vertdv\right\vert}^2+{\left\vertd\eta\right\vert}^2\right)}\right)}\ ] due to ( [ assumption1 ] ) and the fact that . this implies . similarly , by ( [ assumption2 ] ) and ( [ 11053003 ] ) , we conclude that } \in { \rm op } { \left(s{\left({\left < v\right>}^{s+\gamma},~{\left\vertdv\right\vert}^2+{\left\vertd\eta\right\vert}^2\right)}\right)} ] , and thus by ( [ 11053010 ] ) and ( [ 11060401 ] ) we have }\in { \rm op}{\left ( s{\left({\left < v\right>}^{3s+\frac{3\gamma}{2}-1}{\left<\eta\right>}^{s},~~{\left\vertdv\right\vert}^2+{\left\vertd\eta\right\vert}^2\right)}\right)}.\ ] this implies , with sufficiently small , } f , ~{\left < v\right>}^{s+\frac{\gamma}{2}}f\bigr)}_{l^2}\right\vert}&\lesssim & { \big\vert{\left < d_\eta\right>}^s { \left < v\right>}^{s+\gamma}f\big\vert}_{l^2}{\big\vert{\left < v\right>}^{3s+\gamma-1}f\big\vert}_{l^2}\\ & \lesssim & { \varepsilon}{\big\vert{\left < d_\eta\right>}^s { \left < v\right>}^{s+\gamma}f\big\vert}_{l^2}^2+{\varepsilon}{\big\vert{\left < v\right>}^{2s+\gamma}f\big\vert}_{l^2 } + c_{\varepsilon}{\big\vertf\big\vert}_{l^2}^2,\end{aligned}\ ] ] where in the last inequality we used the interpolation inequality due to . combining ( [ 11052910 ] ) we get . letting small enough gives the desired estimate ( [ 11052931 ] ) . the proof is complete . we will make use of the multiplier method used in to prove the above result . first we need to find a suitable multiplier .
in what follows let be fixed , and define a symbol by setting with given by where ) ] and supp ] , and that . as a result the conclusion in lemma [ lem110603 ] will follow if we can show that . to prove the above inequality we apply the estimate in proposition [ prp110511 ] to the function ; this gives that the terms on the left hand side are bounded from above by . then from ( [ 11060301 ] ) it follows that . letting small enough gives ( [ 11060315 ] ) . the proof of lemma [ lem110603 ] is thus complete .
* acknowledgements * the work was done when the author was a postdoctoral fellow at the laboratoire de mathématiques jean leray , université de nantes , and he wishes to thank frédéric hérau and xue ping wang for the hospitality provided . the author gratefully acknowledges the support of the project nonaa of france ( no . anr-08-blan-0228 - 01 ) , and of the nsf of china under grant 11001207 .
r. alexandre , y. morimoto , s. ukai , c .- j . xu and t. yang , the boltzmann equation without angular cutoff in the whole space : iii , qualitative properties of solutions , preprint .
r. alexandre , y. morimoto , s. ukai , c .- j . xu and t. yang , the boltzmann equation without angular cutoff in the whole space : ii , global existence for hard potentials , preprint .
r. alexandre , y. morimoto , s. ukai , c .- j . xu and t. yang , the boltzmann equation without angular cutoff in the whole space : i , global existence for soft potentials , preprint .
r. alexandre , m. safadi , littlewood - paley theory and regularity issues in boltzmann homogeneous equations ii . non cutoff case and non maxwellian molecules , _ discrete contin . dyn . syst . _ 24 ( 2009 ) , 1 - 11 .
in this paper we study a linear model of the spatially inhomogeneous boltzmann equation without angular cutoff . using the multiplier method introduced by f. hérau and k. pravda - starov ( 2011 ) , we establish the optimal global hypoelliptic estimate with weights for the linear model operator .
it is uncontroversial to say that p - values are very widely used in scientific research . for example , six of the twelve research articles and reports in the december 14 2012 issue of _ science _ and 20 out of 22 in the december 2012 issue of _ journal of pharmacology and experimental therapeutics _ use p - values when describing their experimental results , specifying them either exactly or as being less than various thresholds . on the basis of such ubiquity it might be assumed that p - values are useful for scientific inference and that practicing scientists need little explanation of them . however , whether that is the case , or even _ should be _ the case , is controversial . significance tests and the p - values that they yield have been under attack from both statisticians and non - statisticians since they first became widely used . papers critical of them are so myriad that even a simple listing might be as long as this whole paper . conveniently , many alleged deficiencies can be gleaned without reading beyond some titles : p - values are `` not a useful measure of evidence '' [ 1 ] and may be completely irreconcilable with evidence [ 2 ] . they `` predict the future only vaguely '' [ 3 ] and are `` impossible '' [ 4 ] . they are often confused with error rates [ 5 ] , `` what they are '' is logically flawed and `` what they are not '' is coherent [ 6 ] . significance tests are `` insignificant '' [ 7 ] , non - empirical products of sorcery [ 8 ] that have been regularly `` abused and misused '' [ 9 ] . there is at least a `` dirty dozen '' of ways that p - values are regularly misinterpreted and , as `` even statisticians are not immune '' to those misinterpretations [ 10 ] , `` you probably do n't know p '' [ 11 ] . the continued use of significance tests is a `` pervasive problem '' [ 12 ] because they answer a question that no - one means to ask [ 13 ] . the previous paragraph lists an apparently damning set of shortcomings that , if true and relevant , would mean that continued use of p - values for statistical support of scientific inference should not be allowed . from a practical point of view , therefore , a very important question is whether scientists choose to use significance tests and p - values in making inference because of a mistaken assumption that they have useful properties , or because they do actually have useful properties . to decide that question it is necessary to characterize those properties . a significance test is not a hypothesis test [ 11 ] . that will be self - evident to many readers , but not all . consider the likely responses by non - statistically sophisticated users of statistics to this question : which of those two types of procedure is referred to by the common phrase ` null hypothesis significance test ' ? a significance test yields a p - value whereas a hypothesis test yields a decision about acceptance of the null hypothesis or an alternative hypothesis . frameworks exist that attempt to amalgamate significance and hypothesis tests [ e.g. 14 ] or to append desirable inferential aspects of significance testing onto hypothesis testing [ e.g. 15 , 16 ] but those frameworks are controversial and have not been widely adopted . nonetheless there is a mixed approach in very widespread use . unfortunately it is not an intentional mixture but an accidental hybrid that has been called a mishmash [ 17 , 18 ] , and it is a dysfunctional mishmash because the two approaches are incompatible [ 19 , 11 , 20 ] .
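the operational difference between the two procedures can be made concrete in a few lines of r ( an illustrative sketch of my own , assuming student 's t - test with equal variances ) : the significance test reports a graded p - value , whereas the hypothesis test reduces the same data to a binary decision at a size fixed in advance .
set.seed(1)
x <- rnorm(10, mean = 0)
y <- rnorm(10, mean = 1)
p <- t.test(x, y, var.equal = TRUE)$p.value   # significance test: a graded p-value
p
alpha <- 0.05                                  # hypothesis test: size fixed before seeing the data
if (p <= alpha) "reject the null hypothesis" else "accept the null hypothesis"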
the phrase ` null hypothesis significance test ' should be avoided because it is confusing and , arguably , is itself a product of confusion . an essential role for p - values is a core difference between significance tests and hypothesis tests . p - values are conventionally defined with reference to the null hypothesis . for example , the author recently defined them in this way : `` to be specific , a p - value obtained from an experiment represents the long - run frequency of obtaining data as extreme as the observed data , or more extreme , given that the null hypothesis is true '' [ 11 , p. 1560 ] . the other common style of definition specifies tail areas under sampling distributions , which amounts to the same thing . however , judging from the obvious confusion in many publications regarding the properties of p - values , neither style of definition serves well as an explanation . the introductory listing of alleged shortcomings of p - values may give the impression that confusion about p - values takes many forms but , while that may be true to a degree , one form of confusion leads more or less directly to the others . that primary confusion is that p - values measure error rates . the idea that p - values measure type i error rates is as pervasive as it is erroneous , and it comes hand in hand with the significance test - hypothesis test hybrid . it might be seen as a natural extension or corollary of the p - value definition quoted above and , given that many introductory level textbooks actually introduce p - values within the hybrid framework , such a misunderstanding is itself understandable . however , even though deficiencies of textbooks in that regard have been noted many times [ e.g.
18 , 21 , 5 , 11 ] and sometimes analyzed in depth [ 19 , 22 , 23 ] , textbooks are not entirely to blame . it can reasonably be said that r.a . fisher himself was a contributor to the adoption of the hybrid approach . his writings are often difficult to fathom , his approach to argument was often to ` play the man rather than the ball ' , and even while promoting p - values as indices of evidence against null hypotheses he advised : `` it is usual and convenient for experimenters to take 5 per cent as a standard level of significance , in the sense that they are prepared to ignore all results which fail to reach this standard '' . the dichotomization implied by that statement gives the impression that p - values fit into the error - decision framework of neyman and pearson . in the same vein , neyman and pearson may also have contributed to the hybridization . they wrote : `` we may accept or reject a hypothesis with varying degrees of confidence ; or we may decide to remain in doubt '' . that statement appears to make space for experimental conclusions other than the all - or - none decisions usually associated with their approach , but it is only an informal space : the mathematical aspects of their work leave no room for a decision to ` remain in doubt ' . real problems arise with the hybridized approach because p - values and the error rates of the neyman - pearsonian error - decision framework are quite different . the error rates come not from the statistics _ per se _ , but from the behavior of the experimenter upon seeing the statistics , which is what neyman eventually called inductive behaviour [ 26 ] . however , the rarity of specified alternative hypotheses and sample size calculations in scientific research publications [ 27 ] , and the use of multiple levels of significance in statements of p without specific justification in terms of power and error tolerance , make it clear that few scientists actually practice inductive behaviour . a likely reason for the non - adoption of inductive behaviour is that it is incompatible with many scientific activities , as can be gleaned from this oft - quoted passage from neyman and pearson 's original publication of their framework :
`` we are inclined to think that as far as a particular hypothesis is concerned , no test based upon the theory of probability can by itself provide any valuable evidence of the truth or falsehood of that hypothesis . but we may look at the purpose of tests from another view - point . without hoping to know whether each separate hypothesis is true or false , we may search for rules to govern our behaviour with regard to them , in following which we insure that , in the long run of experience , we shall not be too often wrong . '' in the first sentence neyman and pearson opine openly and explicitly that the results of a particular experiment can not be used to discern the truth of the ` particular hypothesis ' of that experiment . that means that , within that framework , experimental results can not be used as evidence for or against statements regarding the state of the world within that experiment .
the quoted passage has been widely reproduced , but its implication for scientific evidence seems to be rarely enunciated . perhaps its discordance with real scientific inference is so extreme that few who read that passage can believe that they have grasped its true meaning . certainly it is difficult to accept the consequences of the passage , for how could the result of an experiment fail to tell the experimenter about the local state of the world ? the answer is that it can do so when the experimenter is required to ignore the evidence in the results and to focus instead on the long - term error rates that would attend various behaviours . the long - run error rates associated with an experiment are a property of the experimental design and the behaviour of the experimenter rather than of the data . the ` size ' of the experiment is properly set before the data are available , so it can not be data - dependent . in contrast , the p - value from a significance test is determined by the data rather than by the arbitrary setting of a threshold . it can not logically be an error rate because it does not force a decision in the way that inductive behaviour does , and if a decision is made to discard the null hypothesis when a small p - value is observed , the decision is made on the basis of the smallness of the p - value in conjunction with whatever information the experimenter considers relevant . thus the rate of erroneous inferences is a function of not only the p - value but also the quality and availability of additional information and , sometimes , the intuition of the experimenter . p - values are not error rates , whether ` observed ' , ` obtained ' or ` implied ' . so , what exactly is a p - value and how should it be interpreted ? fisher regarded it as an indicator of the strength of evidence against a null hypothesis . the link between p - values and evidence is strong , but it is somewhat indirect and , as previously noted , that link is sometimes disputed or disparaged . fisher justified the use of p - values for inductive inference by noting that a small observed p - value indicates that either an unusual event has occurred or the null hypothesis is false . that is sometimes called fisher 's disjunction and it implies that an experiment casts doubt on the null hypothesis in some sort of proportion to the smallness of the observed p - value .
obviously , for practical application , it is desirable to be able to specify the relationship between p - values and evidence more completely than with a vague phrase like ` in some sort of proportion ' , and the next section of this paper explores that relationship empirically . it is hoped that full documentation of those properties will not only help to reduce the misapprehension that p - values are error rates , but also encourage a more thoughtful approach to the evaluation of experimental results . i take as a starting point the fact that p - values are data - dependent random variables [ 31 ] and the view that likelihood functions encapsulate the evidential aspects of data [ 28 - 30 ] . it is not intended to discard the conventional definitions of p - values , but to gain insights into p - value - ness perhaps different from those afforded by those definitions through empirical exploration . the working definition of a p - value in this section is ` that value returned by the r function t.test ' . student 's _ t_-test for independent samples was chosen as the exemplar significance test for its ubiquity , and while it is possible that some of the properties documented will be specific to that test , most will certainly be general . simple monte carlo simulations allow the exploration of p - values . figure 1 shows the two - tailed p - values resulting from one million student 's _ t_-tests for independent groups of n = 10 ( i.e. 18 degrees of freedom ) with the true difference between the groups being a uniform random deviate in the range -4 to 4 times the population standard deviation . the resulting cloud of p - values has mirror - symmetry around an effect size of zero , with the p - values increasingly clustered towards the x - axis as the absolute effect size increases . ( it is notable , and perhaps useful for pedagogic purposes , that there is no sudden change in the distribution of p - values at any level , and so there is not a ` natural ' place to set a threshold for dichotomizing the p - values into significant and not significant . ) that cloud of p - values is informative only in a qualitative fashion , but the distribution of p - values can easily be quantified using the probability density functions obtained as vertical density sections through the cloud ( figure 2 ) . probability density functions of p - values are rarely seen , with the consequence that many users of statistical tests are unfamiliar with the patterns of p - values that can be expected in various experimental circumstances [ 3 , 32 ] . those patterns show that where the null hypothesis is false , a small p - value is more often observed than a large p - value , and anything that would increase the power of an experiment ( e.g. larger effect size or sample size ) leads to the p - value probability density function being increasingly piled up at the left hand end of the graph . ( caption of figure 1 : two - tailed p - values from one million simulated t - tests with n = 10 per group . data are the results from monte carlo simulations where the true effect size was a uniform random variate between -4 and 4 times the population standard deviation . )
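the simulations behind figures 1 and 2 are straightforward to reproduce in outline . the following r sketch is a minimal reconstruction of my own , not the author 's code ; the number of simulations ( reduced for speed ) and the plotting details are assumptions .
set.seed(1)
n.sims <- 1e5                       # fewer than the million runs used in the paper, for speed
n <- 10                             # observations per group
es <- runif(n.sims, -4, 4)          # true effect size in units of the population sd
p <- vapply(es, function(d) {
  x <- rnorm(n, mean = 0, sd = 1)
  y <- rnorm(n, mean = d, sd = 1)
  t.test(x, y, var.equal = TRUE)$p.value   # two-tailed Student's t-test
}, numeric(1))
plot(es, p, pch = ".", xlab = "true effect size", ylab = "two-tailed p-value")
# a vertical section of this cloud approximates one of the p-value densities of figure 2,
# e.g. the density of p for effect sizes near 1:
hist(p[abs(es - 1) < 0.05], breaks = 50, freq = FALSE, main = "p-value density near effect size 1")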
p - value probability density functions , like those in figure 2 , have utility in illustrating the properties of p - values and , while they will be unsurprising to many , they are particularly useful pedagogically when combined with the median p - values to provide an alternative to the conventional power calculations that are part of the neyman - pearson framework [ 31 , 33 ] . however , they are not relevant to the interpretation of an observed p - value that is already in hand , because each distribution is specific to an effect size , and it is usually the size of the effect , or its existence , that is the subject of the experiment . thus p - value probability density functions , like power calculations [ 34 ] , can be useful in the planning stage but have at most tangential relevance to interpretation of the actual results of an experiment . a different picture emerges if , instead of vertical sections through the p - value cloud of figure 1 , horizontal sections are taken at the level of the observed p - value . the horizontal sections tell us how likely any effect size is to yield the observed p - value , information that is directly useful for interpreting experimental results . the horizontal sections are likelihood functions . mathematical likelihood is a relatively rare thing in statistics in that it means almost exactly what a non - statistician would expect . the simplicity of likelihood can be seen in figure 1 : for any observed p - value , the real effect size is more likely to have corresponded to the darker regions of the graph than the lighter regions . however , despite its apparent simplicity , likelihood is not prominently featured in most introductory statistics textbooks , and it is often completely absent . it is not well enough known for me to proceed without some definition and explanation . the first thing to note is that likelihoods relate to hypothesised parameter values rather than to the observations , so it is correct to speak of the likelihood of the effect sizes in figure 1 rather than the likelihood of the observed p - values . the likelihood of a particular hypothesised value of the effect size is proportional to the probability of the observation under the assumption that that hypothesised value is the true effect size . if the observation is a p - value of 0.01 , then the likelihood function over the set of all possible effect sizes is given by the probability of observing p = 0.01 at each of those effect sizes . the reason that likelihood can be defined only up to a proportionality constant is that the probability of an observation is affected by the precision of the observation . for example , the probability of observing a p - value that rounds to 0.01 will necessarily be higher than the probability of observing p = 0.010 , or p = 0.0100 , and so on .
in some circumstances the existence of an unknown proportionality constant limits the utility of likelihood for comparisons between disparate systems , which would usually have different constants , but in the vast majority of cases it is only necessary to compare likelihoods within a single likelihood function , and so there is a single shared constant which can be cancelled out in a ratio of likelihoods . likelihoods can be used as measures of ` support ' provided by the observed data for hypothesised parameter values . it probably seems natural to most readers that a small observed p - value would support a hypothesis of effect size greater than zero more strongly than it would support the hypothesis of an effect size equal to zero ; it is the ratio of the likelihoods that provides a scaling of the relative support . thus the likelihood function shows that hypothesized effect sizes corresponding to the darker regions are better supported by the observed p - value than those corresponding to the lighter regions . as royall puts it , `` the law of likelihood asserts that the hypothesis that is better supported is the one that did a better job of predicting what happened '' [ 35 ] . the law of likelihood says that an observation gives support for a hypothesis that predicted it with one probability over a rival hypothesis that predicted it with another probability to the degree of the ratio of those probabilities . to connect that to figure 1 , consider that we have a continuum of hypotheses , each proposing a different effect size . the likelihood that any particular effect size is equal to the true effect size is directly related to the blackness of the horizontal slice of figure 1 corresponding to the observed p - value , and the relative support for any two hypotheses is proportional to the ratio of their likelihoods . calculation of the relevant likelihood functions can be achieved using the non - central student 's _ t _ distribution , but in the context of an exploration of p - values it is more instructive to calculate them via power curves as functions of effect size . for a particular sample size we can define the power curve as a function of the effect size and of the significance level at which the test is deemed positive . the specification that the effect size be non - zero is only necessary because power is undefined where the null hypothesis is true and so , as we are not concerned with error rates , it can be omitted . after rewriting the power curve as an integral our route to the likelihood function becomes clear .
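the power curve is easy to compute in r . the following is a minimal sketch of my own ( not the paper 's appendix code ) ; the choice of n = 10 per group , the significance level of 0.01 and the grid of effect sizes are illustrative assumptions .
n <- 10                                   # observations per group
alpha <- 0.01                             # significance level, on the scale of p
es <- seq(0.05, 4, by = 0.05)             # grid of non-zero effect sizes (in sd units)
pow <- sapply(es, function(d)
  power.t.test(n = n, delta = d, sd = 1, sig.level = alpha,
               type = "two.sample", alternative = "one.sided")$power)
plot(es, pow, type = "l", xlab = "effect size", ylab = "power at sig. level 0.01")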
the likelihood curve is also a function of the effect size : it is the first derivative of the power curve in equation [ powerfn2 ] with respect to the significance level , which is itself on the scale of p . that likelihood function can conveniently be calculated using the power.t.test ( ) function of r using the code in the appendix . figure 3 shows how that blackness varies with the effect size at the level of p = 0.01 , along with the corresponding likelihood function . interpretation of the likelihood function in figure 3 is quite straightforward : the higher the likelihood at a given effect size , the higher the support given by the observed p - value for that effect size . thus the likelihood function shows the support for any value of the effect size relative to any other . for example , an observed p - value of 0.01 supports one effect size about 17 times more strongly than it supports another . the support quantified by the function relates not only to the null hypothesis , so it simultaneously shows that the observed p - value of 0.01 supports a particular effect size about six times more than it supports an effect size of 0 , but nearly three times less than it supports some other effect size . those interpretations of the likelihood functions are both convenient and easy , but there appears to be a large fly in this ointment : the likelihood function in figure 3 suggests that negative and positive values of the effect size are supported equally well , so if the only thing known about an experimental result was the p - value then likelihood functions like that in figure 3 would yield ambiguous results . in a real experiment the two - tailed p - value is always accompanied by knowledge of the direction of the observed effect , so no investigator is going to credit positive and negative effects equally , but that problem does indicate that the likelihood function from two - tailed p - values is not ideally suited to quantification of experimental evidence . we do not have to give up on likelihood functions , though , because the problem lies in the use of effect - direction - agnostic two - tailed p - values . if we utilize one - tailed p - values we end up with an unambiguous likelihood function . one - tailed p - values from a student 's _ t_-test are not , as sometimes asserted , half of the equivalent two - tailed p - value ; if that were the case then a one - tailed p - value could never be larger than 0.5 . instead , they depend on the sign of the test statistic . when _ t _ is positive the one - tailed p - values are , indeed , half of the two - tailed , but when _ t _ is negative the one - tailed p - value is 1 minus half of the two - tailed p - value . figure 4 shows the distribution of one - tailed p - values . ( caption of figure 4 : one - tailed p - values from monte carlo simulations with n = 10 per group , where the true effect size was a uniform random variate between -4 and 4 times the population standard deviation . in the panel on the right a non - linear vertical axis is used for improved clarity . )
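a possible implementation of the calculation described above is sketched here . it is in the spirit of , but not identical to , the appendix code ; the finite - difference step , the grid of effect sizes and the restriction to effect sizes in the observed direction are my assumptions .
# likelihood of a (positive) effect size d given an observed one-tailed p-value,
# for a two-sample t-test with n observations per group: the derivative of the
# power curve with respect to the significance level, evaluated at the observed p.
p.lik <- function(d, p.obs, n, eps = 1e-6) {
  pw <- function(a) power.t.test(n = n, delta = d, sd = 1, sig.level = a,
                                 type = "two.sample",
                                 alternative = "one.sided")$power
  (pw(p.obs + eps) - pw(p.obs - eps)) / (2 * eps)
}
es <- seq(0.05, 3, by = 0.05)
lik <- sapply(es, p.lik, p.obs = 0.01, n = 10)
plot(es, lik, type = "l", xlab = "effect size", ylab = "likelihood (unnormalized)")
for a small observed one - tailed p - value only effect sizes in the observed direction carry appreciable likelihood , so the grid is restricted to positive values ; under the null hypothesis the one - tailed p - value is uniformly distributed , so the curve approaches 1 as the effect size approaches zero .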
as before , the probability density functions for one - tailed p - values and the likelihood functions can be visualized as vertical and horizontal sections through the cloud of p - values in figure 4 . the p - value probability density functions are heavy at the left or right hand end , depending on the sign of the observed effect ( figure 5 ) , and the likelihood functions are unimodal ( figure 6 ) . if it is intended to use a p - value as the sole index of the evidence provided by the observed data , then the fact that two - tailed p - values imply bimodal likelihood functions is a serious problem : an index of evidence that points in both directions simultaneously is deficient in specificity . thus , a conclusion that follows from these investigations is that one - tailed p - values are better evidential indices than are two - tailed p - values , and one - tailed p - values should be used whenever an experimenter intends to use them as indices of evidence . one - tailed p - values are controversial , so arguments for and against their use will be discussed before returning to the general issue of p - values and experimental evidence . while statistical textbooks are not always helpful regarding the choice between one- and two - tailed testing [ 36 ] , there are many commentary and review papers extolling the virtues of one - tailed p - values [ e.g. 37 , 38 , 39 ] . however , that is far from a consensus position because an even larger set of papers argue that one - tailed tests should _ almost never _ be used [ e.g. 36 , 40 , 41 ] . the tone of the discussion can be seen in a short note by the famous psychologist h.j . eysenck that appeared to conclude a long - running debate about the merits of using one - tailed p - values in psychological research : `` a statement of one - tailed probability is not a statement of fact , but of opinion , and should not be offered instead of , but only in addition to , the factual two - tailed probability . '' however , users of one - tailed p - values can take support from no less an authority than ` student ' ( w.s . gosset ) , who used a one - tailed p - value to conclude that _ l_-hyoscyamine bromide was a soporific in the original description of his eponymous _ t_-test [ 43 ] . it is notable that virtually all papers discussing the appropriateness of one - tailed p - values do so implicitly or explicitly from within the neyman - pearson paradigm . for example , freedman is explicit :
`` this article is concerned only with the hypothesis - testing paradigm ; p - values are used in this article only as an operational means for deciding whether or not to reject the null hypothesis and not as measures of evidence against the null hypothesis . '' however , many other papers expounding on the number of tails that should adorn a p - value are silent about whether they are considering significance tests or hypothesis tests . that is not an idle observation , because conclusions about the suitability of one- and two - tailed p - values for statistical inference will necessarily be influenced by the framework within which the issues are considered . in this section it will be shown that arguments about the applicability of one - tailed p - values to inductive behaviour within an error - decision framework can be irrelevant to the utility of those p - values as evidence . an important argument against the use of one - tailed tests is that they force the experimenter who observes an effect in the unexpected direction to choose between ignoring the effect and ` cheating ' by pretending that the direction was expected all along . the following is on one of the slides that the author has used in teaching the basics of statistical reasoning and testing to young biomedical researchers : `` one - sided tests are always more powerful than two - sided , but are you prepared to ignore a result where your drug makes the responses a lot smaller ? tails : just use two ! '' ( that slide has now been amended ! )
if one - tailed tests imply that one has to ignore effects in an unexpected direction then their use would require a `` lofty indifference to experimental surprises '' [ 45 ] . that sounds like a compelling reason to avoid one - sided tests , and it is widely accepted as such . however , it is also unrealistic . lombardi & hurlbert suggest that in practice few , if any , clear effects in an unexpected direction will be equated with no effect : `` in every _ d _ [ effect size ] associated with a low p value , regardless of sign , there is a good story . and we have never known a colleague who shirked at its telling . such behaviour is predictable . '' furthermore , such behaviour is desirable where the data are viewed in light of the evidence that they contain . a large effect in the unexpected direction yields a one - tailed p - value that is not small , and therefore by convention to be ignored , but it is extreme and thus noteworthy . a one - tailed p - value of 0.995 is just as extreme as a one - tailed p - value of 0.005 , and their likelihood functions are located symmetrically on either side of zero ( or whatever value is chosen as the null hypothesis ) , and thus they offer equally strong evidence against the null hypothesis . from the standpoint of evidence , the issue of ignoring unexpected effects is entirely moot . an experimental result in the unexpected direction might provide unexpected evidence , but it nonetheless provides evidence , and while the predicted - ness of an outcome should properly affect how an experimenter thinks about the result , it does not affect the evidence .
thus the `` lofty indifference '' argument against one - tailed p - values disappears as soon as one discards the notion that p - values are something to do with error rates . other reasons for not preferring one - tailed p - values include the notion that adoption of one - tailed p - values entails a lowering of the standards for publication of results , with an attendant risk of more papers containing unreproducible results and erroneous conclusions . it is true that a one - tailed p - value of 0.05 requires less extreme results than a two - tailed p - value of 0.05 ( the data that gave that one - tailed result would give a two - tailed p - value of 0.1 ) , but the simple interconvertibility between one- and two - tailed p - values leads others to argue that the choice of tails is unimportant , as long as their number is clearly specified and the exact p - value given . some who might forbid the use of one - tailed p - values may not be mollified by that argument because their concerns often relate not to the p - values _ per se _ , but to the assumed tendency for attainment of ` statistical significance ' to be taken as sufficient reason to ` believe ' a result ( or , at least , to publish it ) . in light of that assumption , to license the use of one - tailed p - values would be tantamount to a halving of the protection against false positive claims . however , it is the tendency to uncritically believe that a ` significant ' result is ` real ' ( or publishable ) that is at the heart of this problem . the credibility gained from the words ` statistically significant ' is at fault rather than the nature of one - tailed p - values . if we wish to reduce publication of unreliable inferences based on weak evidence then we should assess the evidence rather than relying on mindless protection by a rickety hurdle of error rate - related ` significance ' [ 11 , 46 ] . so , how many tails do i recommend scientists use ? the short answer is one , but the longer answer will make it clear that the number of tails does not much matter . all arguments for two tails that are only relevant to the neyman - pearsonian error - decision framework can safely be ignored when working within an evidential framework but , at the moment , statistically non - sophisticated readers will not understand that a one - tailed p = 0.99 offers as much evidence for a non - zero effect as p = 0.01 , and so there is a substantial risk of misinterpretation associated with one - tailed p - values for unexpected observations . conveniently , that issue can be circumvented by using two - tailed p - values whenever it is convenient or desirable , as long as the number of tails is clearly indicated . those two - tailed values can be converted into one - tailed values of either tail by anyone who cares to do so . further , because the relationship between a p - value and a likelihood function is one value to one continuous function , it is a relationship of specification rather than measurement : we could simply choose to let a two - tailed p - value stand as a specification of the likelihood function of the relevant one - tailed p - value if that would help . that way we can have our cake and eat it too : it does not matter how many tails one uses as long as the effect direction and the number of tails are specified to the reader for every p - value reported . there are good reasons to accept fisher 's notion that p - values are a summary of the evidential meaning of experimental data analyzed by a significance test . however , so many arguments have been put
forward that p - values should not be interpreted in that manner that some discussion is needed . the arguments directly addressed are listed here :
* p - values do not offer a consistent scaling of evidence . ( p - values are affected by sample size ; p - values are affected by stopping rules . )
* p - values are the product of a logically flawed approach . ( fisher 's disjunction is false ; p - values depend on data that were not observed ; the null hypothesis is often known to be false before the experiment . )
* p - values overstate the strength of evidence . ( p is smaller than another statistical variable ; a large enough sample will always yield a small p - value . )
* p - values conflict with the likelihood principle .
for p - values to be used as indices or summaries of the evidential meaning of experimental data , it is desirable that they be readily interpreted . if there were a simple and consistent relationship between the numerical value of p and evidence then interpretation of p would be trivial , but the relationship is neither simple nor consistent . however , as will be shown below , the complexity comes from the nature of evidence rather than from the nature of p - values . a common criticism of p - values as indices of evidence is that their evidential meaning varies with sample size [ 48 ] , a criticism that boils down to the fact that it is difficult to answer questions of this form : `` is a result of p = 0.025 from a small sample the same strength of evidence against the null hypothesis as a result of p = 0.025 from a large sample ? '' to answer that question empirically , sets of 100,000 one - sided p - values were generated from student 's _ t_-test for independent samples by monte carlo simulation , as before , with several different sample sizes up to 100 . the results show that as the sample size increases , the cloud of p - values becomes narrower and steeper ( figure 7 ) , with the consequence that , for any given p - value , the likelihood function gets narrower and closer to the null hypothesis effect size ( figure 8 ) . thus it is true to say that the strength of evidence summarized by a p - value varies with sample size , but that does not mean that the p - value is not a useful index , or summary , of evidence ; it simply means that a p - value should always be accompanied by the sample size . the p - value and sample size together correspond to a unique likelihood function , and thus act as a summary of that function and the evidence quantified by that function . ( caption of figure 7 : one - sided p - values from monte carlo simulations where the true effect size was a uniform random variate between -4 and 4 times the population standard deviation . a non - linear vertical axis is used for clarity . ) the difficulty here is not really in the meaning of the p - value , but in equating the evidential meaning of the same p - value obtained from different sample sizes . the question posed in the previous paragraph only asks about evidence relative to the null hypothesis , and is restricted to the dimension ` strength ' .
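the narrowing of the likelihood function with sample size can be seen by re - using the p.lik ( ) sketch given above ; the particular sample sizes of 10 and 100 per group are illustrative assumptions of mine , chosen only to contrast a small and a large experiment .
es <- seq(0.01, 2, by = 0.01)
lik.small <- sapply(es, p.lik, p.obs = 0.025, n = 10)    # small experiment
lik.large <- sapply(es, p.lik, p.obs = 0.025, n = 100)   # large experiment
# scale each curve to a maximum of 1 so their shapes can be compared directly
plot(es, lik.small / max(lik.small), type = "l", lty = 2,
     xlab = "effect size", ylab = "relative likelihood")
lines(es, lik.large / max(lik.large), lty = 1)
# the n = 100 curve is much narrower and lies closer to zero, as in figure 8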
while that question sounds reasonable , given that the evidential nature of a p - value is usually described as being ` the strength of evidence against the null hypothesis ' , it is actually ill - posed . the evidence encoded by a p - value and its corresponding likelihood function is not applicable only to the null hypothesis , and it is not fully specified by strength . it is better to think of it as having ( at least ) the two dimensions , strength and specificity , with strength relating to the height of the likelihood function and specificity to the width and location of that function . in that way , significance testing can be thought of as being a process for estimation of the parameter on a continuous scale , rather than as a decision process for dichotomously choosing between hypotheses . it is reassuring to find myself in agreement with both e.t . jaynes and r.a . fisher . jaynes , a leading proponent of bayesian approaches and no friend of significance testing , said that `` the distinction between significance testing and estimation is artificial and of doubtful value in statistics '' , and fisher said : `` it may be added that in the theory of estimation we consider a continuum of hypotheses each eligible as null hypothesis , and it is the aggregate of frequencies calculated from each possibility in turn as true [ ... ] which supply the likelihood function '' . focusing solely on the strength of the evidence is insufficient because while the evidence may be _ against _ the null hypothesis , it is also _ in favor _ of parameter values near to the observed estimate . the ill - posed question asked above can be re - posed in a form that is compatible with estimation and the multiple dimensions of evidence : what are the evidential meanings of an observed p = 0.025 from a small sample and from a large sample ?
after reference to figure 8 , that question can be usefully answered like this : the result from the larger sample indicates that the true effect size is unlikely to be as low as zero , that it is very likely to be less than 1 , and that it can be expected to be quite close to the observed effect size ; the result from the smaller sample is evidence that the true effect size is unlikely to be as low as zero but might be quite different from the observed effect size . the general features for interpretation of p - values that can be gleaned from those examples will not surprise anyone who has experience with p - values : smaller p - values offer stronger evidence against the null hypothesis . the meaning of a p - value is affected by the sample size , exactly as any index of evidence _ should _ be : larger sample sizes provide higher specificity from a more reliable estimate of the true effect size . another important criticism of p - values as indices of evidence is the notion that p - values are affected by stopping rules , and thus by the experimenter 's intentions [ e.g. 4 ] . however , that seems only to be the case when the method of calculation of the p - values is adjusted to account for the stopping rules . such adjustments typically ensure a uniform distribution of p under the null hypothesis , a distribution that is often assumed or implied , and sometimes stated , to be required for a p - value to be valid . however , the only reason that p - values would need to be uniformly distributed under the null hypothesis is to allow them to comply with the _ frequentist _ or _ repeated sampling principle _ : `` in repeated practical use of a statistical procedure , the long run average error should not be greater than ( and ideally should equal ) the long - run reported error . '' however , p - values are not error rates . they are outside the scope of that principle , and so any criticism of the utility of p - values based on non - compliance with the frequentist principle is entirely moot . i assert that the evidential meaning of p - values is immune to the influence of stopping rules , just as that of likelihood functions is . that assertion is likely to elicit strong disagreement from some readers , but its truth can be demonstrated empirically by simulations with non - standard stopping rules . to that end , experiments were simulated with a sample size of 5 per group and a student 's _ t_-test performed . in runs where the observed p - value was less than 0.05 , or greater than 0.15 , the p - value was accepted and the experiment stopped , but in all runs where the original p - value was between 0.05 and 0.15 an extra 5 observations were added to each group and a new p - value calculated .
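that two - stage design is simple to simulate . the sketch below is my own reconstruction of the described procedure , not the original simulation code ; the number of runs and the use of the two - tailed p - value from t.test are assumptions .
set.seed(2)
n.sims <- 1e5
two.stage.p <- function(d = 0) {               # d = 0 simulates the null hypothesis
  x <- rnorm(5, 0, 1); y <- rnorm(5, d, 1)
  p <- t.test(x, y, var.equal = TRUE)$p.value
  if (p >= 0.05 && p <= 0.15) {                # 'data peeking': add 5 more observations per group
    x <- c(x, rnorm(5, 0, 1)); y <- c(y, rnorm(5, d, 1))
    p <- t.test(x, y, var.equal = TRUE)$p.value
  }
  p                                            # the uncorrected p-value from whichever stage stopped
}
p.null <- replicate(n.sims, two.stage.p(0))
hist(p.null, breaks = 50, freq = FALSE)        # visibly non-uniform under the null hypothesis
mean(p.null < 0.15)                            # substantially below 0.15, as described in the text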
that two - stage stopping rule contains an interim analysis ( ` data peeking ' ) , and thus has an informal sequential design . ( it is worth noting , as an aside , that such features are thought to be quite common , but usually occult , in some basic biomedical research areas . ` abuses ' of statistical processes are only to be expected in a circumstance where the importance of stopping rules to the resulting error rates from neyman - pearson hypothesis testing is rarely presented in introductory statistics textbooks , and where the description of experimental protocols in research publications has become extremely abbreviated and formulaic . ) ( caption of figure 9 : the likelihood function ( curve ) and the frequency of occurrence of p - values in the range 0.004975 to 0.00525 at all effect sizes in the simulations ( histogram ) . compare the right panel with figure 6 . ) the results of those simulations with the null hypothesis true show a non - uniform distribution of p - values calculated without ` correction ' for the two - stage stopping rule , as expected ( figure 9 ) . that non - uniformity does mean that those p - values can not be correctly interpreted as the probability under the null hypothesis of obtaining a result at least as discrepant as that observed ( the probability of obtaining a p - value less than , for example , 0.15 was substantially lower than 0.15 ) , but that does not mean that the evidential meaning of the p - values is changed . if that were the case then we would expect to see a change in the relationship between the observed p - value and the likelihood function , which is well known to be independent of stopping rules . there is no such change , as can be seen in the second panel of figure 9 . the frequency distribution of effect sizes among the simulation runs that went on to yield p = 0.005 at the final sample size matches the likelihood function for p = 0.005 from a standard one - stage stopping rule at that same sample size . that result indicates that , despite the non - uniform distribution of p - values under the null hypothesis , the two - stage stopping rule has not affected the distribution of the p - values among the experiments that went to the second stage . ( it is not necessary to demonstrate that the two - stage protocol fails to affect the runs that terminated at the first stage , because the mere possibility of extra observations that did not actually eventuate can not influence their p - values . ) thus the relationship between p - values and likelihood functions for these two - stage experiments is exactly the same as it is for conventional fixed - sample - size stopping rules , and it can be concluded that the evidential meaning of the p - value , like that of the likelihood function , is independent of the stopping rules , as long as the p - value is not ` corrected ' or ` adjusted ' . that result is interesting from a theoretical point of view , but it also has the practical consequence that experiments can be conducted sequentially without the need for complicated and punitive ` corrections ' of the p - values , as long as the p - values are correctly interpreted as a summary of evidence rather than erroneously assumed to be error rates .
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the scientist who carefully examines and interprets his observations ( via likelihood functions ) as they arise is behaving appropriately .he is not weakening or damaging the statistical evidence , he is not spending or using up statistical capital , he should not be chastized for peeking at the data , and there is no valid basis for levying a statistical fine on his study ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ arguments that p - values do not offer a consistent scale of evidence are often based on the false premise that evidence is one - dimensional or mistakenly assume that p - values should be ` corrected ' for the stopping rules . an important claim that p - values are logically flawed as indices of evidence comes from a criticism of fisher s disjunction which says that p - values do not cast doubt on the null hypothesis in the manner that fisher suggested .cohen illustrates that claim by drawing an analogy between fisher s disjunction and this syllogism [ 51 ] : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ if a person is an american , then he is probably not a member of congress .+ this person is a member of congress .+ therefore , he is probably not an american . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ as cohen says , the last line of that syllogism about the american is false even though it would be true if the word ` probably ' were omitted from the first and last lines .however , cohen is incorrect in suggesting that it is functionally analogous to fisher s disjunction . 
as hagen pointed out in a response published a few years after cohen s paper[ 52 ] , the null hypothesis in fisher s disjunction refers to the population , whereas in cohen s syllogism it refers to the sample .fisher s disjunction looks like this when put into the form of a syllogism : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ extreme p - values from random samples are rare under the null hypothesis . + an extreme p - value has been observed .+ ( therefore , either a rare event has occurred or the null hypothesis is false . ) + therefore , the null hypothesis is probably false . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ there is nothing wrong with that , although the line in parentheses is not logically necessary .when cohen s syllogism is altered to refer to the population , it also is true : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ members of congress are rare in the population of americans .+ this person is a member of congress .+ ( therefore , either a rare event has occurred or this person is not a random sample from the population of americans . ) + therefore , this person is probably not a random sample from the population of americans . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ if a selected person turns out to be a member of congress then an unusual event has occurred , or the person is a member of a non - american population in which members of congress are more common , or the selection was not random . assuming that all members of the american congress are american there is no relevant non - american populationfrom which the person might have been randomly selected , so the observation casts doubt on the random selection aspect .cohen is incorrect in his assertion that the fisher s disjunction lacks logical integrity .( it is worth noting , parenthetically , that cohen s paper contains many criticisms of null hypothesis testing that refer to problems arising from the use of what he describes as `` mechanical dichotomous decisions around the sacred .05 criterion '' .he is correct in that , but the criticisms do not directly apply to p - values used as indices of evidence . 
) the notion that a p - value depends on data that have not been observed is an interesting idea that comes from the fact that p - values are tail areas of the sampling distribution .they sum the probability of observations _ at least as extreme _ as that observed those exactly as extreme as the observation and those more extreme .the more extreme observations are the observations that have not been observed .a widely quoted passage by jeffreys says : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ what the use of p implies , therefore , is that a hypothesis that may be true may be rejected because it has not predicted observable results that have not occurred .that seems a remarkable procedure ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ such descriptions of this issue might lead to alarm , but a p - value depends on unobserved data only because the p - value expresses the extremeness of the observed data . any useful index of extremenesshas to refer in some manner to the range of the population .for instance , a friend of the author , g , is taller than most men .let s say that he is at the 99th percentile of heights for adult men in australia .presumably no - one is concerned that the statements about g reference unobserved data . however , when i re - state the information in a probabilistic manner it becomes clear that it is dependent on unobserved data in exactly the same way as p - values are : the probability that a randomly selected man from australia is taller than g is one percent ; the probability that a random sample yields a p - value smaller than 0.01 when the null hypothesis is true is one percent .that type of dependency on unobserved data does nt make the claims false or in some special way misleading , just that they are on relative scales .p - values are quantitative claims of extremeness of data on a relative scale .the fact that p - values are derived from tail areas of the sampling distribution in no way disqualifies them as indices of evidence .an interesting point often raised in arguments against the use of hypothesis tests and significance tests for scientific investigations is that null hypotheses are usually known to be false before an experiment is conducted . 
when defined as the probability under the null hypothesis of obtaining data at least as extreme as those observed, p - values would seem to be susceptible to the criticism in that they measure the discordance between the data and something that is known to be false .the argument may have some relevance to hypothesis tests , but it is irrelevant to any use of p - values in estimation and in the assessment of evidence because the null hypothesis serves as little more than an anchor for the calculation a landmark in parameter space , as was discussed in section [ psamplesize ] .in contrast to hypothesis tests , significance tests and their p - values are immune to arguments that they lack utility when null hypotheses are routinely false .there is an idea that p - values overstate the evidence against the null hypothesis that seems to come from several related issues .first , it is claimed that because p - values are [ often ] misconstrued as being an error rate they [ often ] overstate the true evidence against the null hypothesis [ e.g. 14 , 5 ] .the solution to that problem is obvious , and it does nt involve discarding p - values ! a second rationale for saying that p - values overstate evidence seems to be based on the non - linear relationship between p - values and likelihoods or bayes factors [ 54 , 55 ] .the evidential meaning of a p - value is richer than just the maximal height of the likelihood function , as discussed in section [ psamplesize ] .however , even ignoring the other dimension(s ) of evidence , in order for non - linearity to lead to some overestimation of the strength evidence it is necessary that the evidential strength _ as perceived by the researcher _ be distorted or invalidated by that non - linearity .questions about whether researchers have a faulty perception of the relationship between p - values and their evidential meaning would seem to be accessible to empirical investigation , and the results of such a study would trump any theoretical argument . a third basis for arguing that p - values overstate the strength of evidence is the fact that two - tailed p - values are consistently smaller than the posterior probability of the null hypothesis derived from a bayesian analysis , often by an order of magnitude . that particular fact led to declaration of `` the irreconcilability of p values and evidence '' [ 2 ] . an easy response to this argumentis that p - values do not measure the same thing as bayesian posterior probabilities , and so any wish that they be numerically equivalent is misguided .moreover , as others point out [ 56 ] , the large discrepancy between p - values and the bayesian posteriors in that work was mostly a consequence of placing a large fraction of the prior probability on the null using a ` spike and slab ' prior .one - sided null hypotheses do not lend themselves to such spiking with prior probability because the null and non - null hypotheses have equivalent ranges of possible values and , as shown in another paper in the same issue of that journal , it turns out that one - tailed p - values are much closer in value to bayesian posterior probabilities [ 57 ] .thus , even if that argument were a reason to doubt the value of p - values as evidence , it would apply strongly only to two - tailed p - values . 
even in that caseit does not seem to be problematical because the one - to - one relationship between p and likelihood functions means that a bayesian analysis has to agree with the p - values in every case except where the prior probability distribution has substantial weight distant from the observed outcome after all , the posterior distribution is just a scaled product of the prior and the likelihood function .it is true to say that a large enough sample will always yield a small p - value , but only when the null hypothesis is false , and in that case a small p - value that calls the null hypothesis into doubt is a _ good _ thing .hurlbert & lombardi call this criticism of p - values the `` fallacy of the obese '' and say : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the demon of the overlarge sample : it lurks quietly in the darkness , waiting for researchers to pass by who are too focused on obtaining adequate sample sizes .if sample sizes are too large , one may be `` in danger '' of getting very low p values and establishing the sign and magnitude of even small effects with too much confidence .oh , the horror of it all ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ some might feel that that response is unfair to the argument about large samples because the argument is not about correctly identifying null hypotheses as true or false . instead, it is about the utility of identifying a null hypothesis as false when it is nearly true , when it is false by a trivially small amount .if the results of experiments were presented only as a p - value then the criticism might apply , but for many reasons results should never be presented only as a p - value , and such a deficient presentation should be quite rare. 
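the point is easily made concrete . in this r fragment ( my own example ; the true effect of 0.02 standard deviations and the sample size of 100000 per group are arbitrary assumptions ) a trivially small true effect yields a very small p - value , but the accompanying estimate and interval make the triviality of the effect obvious , which is why results should never be reduced to a p - value alone :
....
set.seed(3)
x <- rnorm(1e5, mean = 0.02)   # 'treated' group with a trivially small true effect
y <- rnorm(1e5, mean = 0)      # control group
fit <- t.test(x, y, var.equal = TRUE)
fit$p.value    # very small, because the sample is huge
fit$estimate   # the two group means, each estimated precisely
fit$conf.int   # a narrow interval around a difference of about 0.02
....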
a small p - value can be potentially misleading in cases where the null hypothesis is false only in the absence of information regarding the observed effect size and sample size . the likelihood principle says that if two experiments yield proportional likelihood functions then they support the same inferences about the hypotheses . conflict between the likelihood principle and the frequency or repeated sampling principle is well understood , but given that p - values index likelihood functions it is difficult to see how they could conflict with the likelihood principle in a deep or inevitable way . in fact that alleged conflict is not intrinsic to p - values at all , but comes from ` corrections ' to p - values that force the p - values to conform to the frequency principle , as can be inferred from the usual illustration of the conflict , which involves a tale of two statisticians testing a coin for bias . imagine that two frequentist statisticians a and b collaborate on an experiment but have not negotiated stopping rules in advance ( presumably each thinks the other is a reasonable person who would have the same stopping rules in mind ) . however , the unstated intention of a is to toss the coin six times and count the number of heads ( a standard fixed sample size design ) , and b intends to toss the coin repeatedly until a head comes up ( a sequential design that is often called negative binomial sampling ) . the experimental result was five tails in a row followed by one head , and so the stopping rules of each statistician are simultaneously satisfied . statistician a calculates a one - sided p - value of about 0.11 using the conventional formula for binomial sampling , $ p_{\mathrm{a } } = \sum_{k=0}^{1 } \binom{6}{k } \theta^{k } ( 1 - \theta ) ^{6 - k } = 7/64 \approx 0.11 $ ( [ binomexpt ] ) , where $ \theta = 1/2 $ is the probability of the coin turning up heads under the null hypothesis . statistician b calculates a p - value of about 0.03 from the probability of needing 5 or more tails before the first head , using the formula for ` negative ' binomial sampling , $ p_{\mathrm{b } } = \sum_{j=5}^{\infty } \theta ( 1 - \theta ) ^{j } = ( 1 - \theta ) ^{5 } = 1/32 \approx 0.03 $ ( [ negbinomexpt ] ) . statistician b claims a ` significant ' result because $ p < 0.05 $ , but statistician a claims that the result is ` not significant ' . their behavioural inferences differ , and a fight ensues .
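the arithmetic of the example , and the proportionality of the two likelihood functions used in the next paragraph , can be verified directly ( this is my own checking code , not part of the original illustration ) :
....
theta <- 0.5                               # null hypothesis: P(heads) = 1/2
# statistician a: fixed six tosses, one-sided binomial p-value for <= 1 head
p_a <- pbinom(1, size = 6, prob = theta)   # 7/64 = 0.109
# statistician b: toss until the first head, p-value = P(5 or more tails first)
p_b <- (1 - theta)^5                       # 1/32 = 0.031
c(p_a = p_a, p_b = p_b)

# the likelihood functions assumed by the two statisticians
lik_a <- function(th) choose(6, 1) * th * (1 - th)^5   # binomial sampling
lik_b <- function(th) th * (1 - th)^5                  # negative binomial sampling
th <- seq(0.01, 0.99, by = 0.01)
range(lik_a(th) / lik_b(th))               # constant ratio of 6: proportional
....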
that example is supposed to illustrate a conflict between p - values and the likelihood principle like this : the one dataset yielded two different inferences via the p - values , but the probabilities of the observed result assumed by the two statisticians are proportional for statistician a and for statistician b thus the likelihood functions are proportional and , according to the likelihood principle , the statisticians should both make the same inference .ergo , conflict .however , it is important to note that while the conflict is real , it is conflict between neyman pearsonian hypothesis testing and the likelihood principle .both statisticians in the example calculated and used their p - values as if they were long term error rates rather than as indices of evidence .the difference between equations [ binomexpt ] and [ negbinomexpt ] is that the latter assumes that the experiment had a sequential design and adjusts the calculation of the p - value to take the consequences of that design on error rates into account , but , as was shown in section [ sec : stopping_rules ] , where p - values are used as indices of evidence then it is necessary not to adjust them for the sequential design of the experiment .inference from the unadjusted p - values might differ from the behavioural inference that complies with the frequency principle , but inference from the unadjusted p - values via the likelihood functions that they index can be completely compatible with the likelihood principle .the apparent conflict between frequentism and the likelihood principle comes not from the frequentist conception of probability or from the nature of p - values , but from the conflict between experimental evidence and the error - decision framework of neyman and pearson .by examining the largely unremarked relationship between p - values and likelihood functions this paper shows how p - values can and should be used in evidence - based inference .the results come from simply documenting the emergent properties of p - values rather than from consideration of presupposed definitions , and that unusual approach allows the true nature of p - values to be discerned without interference by preconceptions regarding what p - values might be .the results show that , despite claims to the contrary , p - values do summarize the evidence in experimental data , but they summarise by indexing a likelihood function rather than by their numerical value .thus inevitably the likelihood function provides a more complete depiction of that evidence than the p - value , not only because it shows the strength of evidence on an easily understood scale , but also because it shows how that evidence relates to a continuous range of parameter values .however , for the those understandings to become embedded into the scientific community ( and , dare i say it , the statistical community ) it will be necessary for likelihood functions themselves to be more widely understood . 
as edwards pointed out [ 58 ] , users will readily gain a feel for the practical meaning of likelihood functions , and thus p - values , simply by using them .hacking succinctly paraphrased edwards : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ if you use likelihoods in reporting experimental work , you will get to understand them just as well as you now think you understand probabilities . [59 ] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ full likelihood functions give a more complete picture of the evidential meaning of experimental results than do p - values , so they are a superior tool for viewing and interpreting those results . however , it is sensible to make a distinction between the processes of drawing conclusions from experiments and displaying the results . for the latter ,it is probably unnecessary and undesirable for a likelihood function be included every time a p - value might otherwise be specified in research papers .to do so would often lead to clutter and would waste space because , given knowledge of sample size and test type , a single p - value corresponds to a single likelihood function and thus stands as an unambiguous index .however , during deliberative interpretation of data and while forming scientific inference it would be sensible for an experimenter to view the likelihood functions along with the p - values .software packages used for statistical analysis should therefore offer , by default , a likelihood function as part of their report on the result of tests of significance . of course , for such a change to make sense it will be necessary that textbooks for introductory courses on statistics be revised both to introduce likelihood functions and to make clear the distinction between fisherian significance tests and neyman pearsonian hypothesis test .the approach used in this paper to show the interrelatedness of p - values and likelihood functions appears to have some pedagogic utility , and could serve as a useful model for textbook authors to follow . 
in particular , the determination of likelihood functions using both p - values and the cumulative power curves should allow presentation of several important concepts in an integrated manner .the p - value from a significance test has properties consistent with it being a useful tool for scientific inference .its widespread use for such purposes is therefore neither an unfortunate accident of history , nor is it a consequence of institutionalization of a faulty approach , despite such claims in the literature .the conventional usage of p - values is not free from the need for reform , however , but efforts by statisticians to improve the manner in which scientific experiments are analyzed and interpreted should be directed at increasing explicit consideration of the evidence provided by the experimental data .presentation of the statistical analyses of experimental results should always be accompanied by what abelson called ` principled argument ' [ 47 ] and ideally by replication of the key experiments . both of those are better supported by evidential assessment of the experimental results than by error rates .hubbard r , lindsay rm ( 2008 ) why p values are not a useful measure of evidence in statistical significance testing . theory & psychology 18:6988 .berger j , sellke t ( 1987 ) testing a point null hypothesis : the irreconcilability of p values and evidence .journal of the american statistical association 82:112122 .3 . cumming g ( 2008 ) replication and p intervals : p values predict the future only vaguely , but confidence intervals do much better .perspectives on psychological science 3:286300 .4 . johansson t ( 2010 ) hail the impossible : p - values , evidence , and likelihood .scandinavian journal of psychology 52:113125 .hubbard r , bayarri mj ( 2003 ) confusion over measures of evidence ( p s ) versus errors ( s ) in classical statistical testing .the american statistician 57:171178 . 6 .schervish mj ( 1996 )p values : what they are and what they are not . the american statistician 50:203206 .gill j ( 1999 ) the insignificance of null hypothesis significance testing . political research quarterly 52:647674 .lambdin c ( 2012 ) significance tests as sorcery : science is empirical significance tests are not . theory & psychology 22:6790lecoutre b , lecoutre m - p , poitevineau j ( 2001 ) uses , abuses and misuses of significance tests in the scientific community : wo nt the bayesian choice be unavoidable ?international statistical review / revue internationale de statistique 69:399417 . 10 .lecoutre m - p , poitevineau j , lecoutre b ( 2003 ) even statisticians are not immune to misinterpretations of null hypothesis significance tests .international journal of psychology 38:3745 . 11 .lew mj ( 2012 ) bad statistical practice in pharmacology ( and other basic biomedical disciplines ) : you probably do nt know p.british journal of pharmacology 166:15591567 .wagenmakers e - j ( 2007 ) a practical solution to the pervasive problems of p values .psychonomic bulletin & review 14:779804 .jaynes et ( 1980 ) what is the question ? , in _ bayesian statistics _ , eds bernardo jm , degroot mh , lindley dv , smith afm ( valencia university press , valencia ) , 14 .berger j ( 2003 ) could fisher , jeffreys and neyman have agreed on testing ?statistical science 18:112 .schweder t , norberg r ( 1988 ) a significance version of the basic neyman pearson theory for scientific hypothesis testing .scandinavian journal of statistics 15:225242 . 
16 .mayo dg , spanos a ( 2011 ) error statistics , in philosophy of statistics , eds bandyopadhyay ps & forster mr .elsevier , pp 146 .gigerenzer g ( 1992 ) the superego , the ego , and the i d in statistical reasoning , in _ a handbook for data analysis in the behavioral sciences : methodological issues _, eds gideon k , lewis c ( l. erlbaum associates , hillsdale , nj ) , pp 311339 . 18 .gigerenzer g ( 1998 ) we need statistical thinking , not statistical rituals .behavioral and brain sciences 21:199200 .hurlbert sh , lombardi cm ( 2009 ) final collapse of the neyman pearson decision theoretic framework and rise of the neofisherian .annales zoologici fennici 46:311349 . 20 .oakes m ( 1986 ) statistical inference ( wiley , chichester ; new york ) .haller h , krauss s ( 2002 ) misinterpretations of significance : a problem students share with their teachers .methods of psychological research 7:120 . 22 .halpin pf , stam hj ( 2006 ) inductive inference or inductive behavior : fisher and neyman pearson approaches to statistical testing in psychological research ( 1940 - 1960 ) . the american journal of psychology 119:625653 .huberty cj ( 1993 ) historical origins of statistical testing practices : the treatment of fisher versus neyman pearson views in textbooks . the journal of experimental educational 61:317333 . 24 .fisher ra ( 1966 ) the design of experiments ( london oliver & boyd , edinburgh ) . 25 .neyman j , pearson e ( 1933 ) on the problem of the most efficient tests of statistical hypotheses .philosophical transactions of the royal society of london series a 231:289337 . 26 .neyman j ( 1957 ) `` inductive behavior '' as a basic concept of philosophy of science .revue de linstitut international de statistique 25:722 . 27 .strasak a , zaman q , marinell g , pfeiffer k ( 2007 ) the use of statistics in medical research : a comparison of the new england journal of medicine and nature medicine .the american statistician 61:4755 . 28 . hacking i ( 1965 )logic of statistical inference .( cambridge university press , cambridge ) . 29 .royall r ( 1997 ) statistical evidence : a likelihood paradigm ( chapman & hall / crc ) 30 .edwards awf ( 1992 ) likelihood ( the johns hopkins university press , baltimore , md )sackrowitz h , samuel - cahn e ( 1999 ) p values as random variables expected p values .american statistician 53 : 326331 .murdoch dj , tsai y - l , adcock j ( 2008 ) p - values are random variables .the american statistician 62:242245 .bhattacharya b , habtzghi d ( 2002 ) median of the p - value under the alternative hypothesis .the american statistician 56:202206 .hoenig j , heisey d ( 2001 ) the abuse of power : the pervasive fallacy of power calculations for data analysis . the american statistician 55 : 16 35 .royall r ( 2000 ) on the probability of observing misleading statistical evidence .journal of the american statistical association 95:760768 .lombardi cm , hurlbert sh ( 2009 ) misprescription and misuse of one - tailed tests .austral ecology 34:447468 . 
37 .bland jm , bland dg ( 1994 ) statistics notes : one and two sided tests of significance .bmj 309:248248 .kaiser hf ( 1960 ) directional statistical decisions .psychological review 67:160167 .rice wr , gaines sd ( 1994 ) ` heads i win , tails you lose ' : testing directional alternative hypotheses in ecological and evolutionary research .trends in ecology & evolution 9:235237 .dubey sd ( 1991 ) some thoughts on the one - sided and two - sided tests .journal of biopharmaceutical statistics 1:139150 .ringwalt c , paschall mj , gorman d , derzon j , kinlaw a ( 2011 ) the use of one- versus two - tailed tests to evaluate prevention programs .evaluation and the health professions 34:135150 .eysenck hj ( 1960 ) the concept of statistical significance and the controversy about one - tailed tests .psychological review 67:269271 . 43 . student ( 1908 ) the probable error of a mean .biometrika 6:125 .freedman ls ( 2008 ) an analysis of the controversy over classical one - sided tests .clinical trials 5:635640 .burke cj ( 1953 ) a brief note on one - tailed tests .psychological bulletin 50:384 .gigerenzer g ( 2004 ) mindless statistics .journal of socio - economics 33:587606 .abelson rp ( 1995 ) statistics as principled argument ( taylor & francis , hillsdale , nj ) .royall rm ( 1986 ) the effect of sample size on the meaning of significance tests .the american statistician 40:313315 .fisher r ( 1955 ) statistical methods and scientific induction .journal of the royal statistical society series b ( methodological ) 17:6978 .royall r ( 2004 ) the likelihood paradigm for statistical evidence , in _ the nature of scientific evidence : statistical , philosophical and empirical considerations _ , eds taper ml & lele sr ( university of chicago press , pp 119152 .cohen j ( 1994 ) the earth is round ( p < .05 ) .american psychologist 49:9971003 . 52 .hagen rl ( 1997 ) in praise of the null hypothesis statistical test .american psychologist 52:1524 .jeffreys h ( 1961 ) theory of probability ( oxford university press , oxford ) .goodman sn ( 1993 ) p values , hypothesis tests , and likelihood : implications for epidemiology of a neglected historical debate .american journal of epidemiology 137:485496 . 55 .( 2001 ) of p - values and bayes : a modest proposal .epidemiology 12:295297 .vardeman sb ( 1987 ) testing a point null hypothesis : the irreconcilability of p values and evidence : comment .journal of the american statistical association 82:130131 .casella g , berger rl ( 1987 ) reconciling bayesian and frequentist evidence in the one - sided testing problem .journal of the american statistical association 82:106111 .edwards afw ( 1972 ) likelihood : an account of the statistical concept of likelihood and its application to scientific inference ( cambridge university press , cambridge ) .hacking i ( 1972 ) likelihood .british journal for the philosophy of science 23:132137 .r code for the likelihood functions based on p - values : .... 
likefromstudentstp <- function(n, x, pobs, test.type) {
  # test.type can be 'one.sample', 'two.sample' or 'paired'
  # n is the sample size (per group for test.type = 'two.sample')
  # x is the hypothesised true effect size in units of sigma (delta/sigma)
  # pobs is the observed one-sided p-value
  # h is a small number used in the trivial differentiation
  h <- 1e-7
  powerdn <- power.t.test(n = n, delta = x, sd = 1, sig.level = pobs - h,
                          type = test.type, alternative = "one.sided")$power
  powerup <- power.t.test(n = n, delta = x, sd = 1, sig.level = pobs + h,
                          type = test.type, alternative = "one.sided")$power
  # the likelihood is the slope of the cumulative p-value distribution at pobs
  (powerup - powerdn) / (2 * h)
}

# cumulative distribution of p-values for a fixed true effect size
pcvscum <- function(p) {
  power.t.test(n = n, delta = deltaonsigma, sd = 1, sig.level = p,
               type = test.type, alternative = tails)$power
}

# probability density distribution of p-values
pcvs <- function(p) {
  grad(pcvscum, p)   # grad() is from the numDeriv package
}

# run example
library(numDeriv)
deltaonsigma <- 0.5
n <- 10
test.type <- "two.sample"
tails <- "one.sided"
p <- c(0.001, 0.01 * (1:99), 0.999)
y <- pcvs(p)
plot(p, y, type = "l")
....
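as written , likefromstudentstp evaluates the likelihood at a single hypothesised effect size ; mapping it over a grid traces out the full likelihood function indexed by an observed p - value . the following usage example is my own ( the sample size of 10 , the observed p - value of 0.05 and the grid of effect sizes are arbitrary choices ) :
....
# tracing the likelihood function indexed by an observed p-value of 0.05
delta <- seq(0.01, 2, by = 0.05)
L <- sapply(delta, function(d) likefromstudentstp(10, d, 0.05, "two.sample"))
plot(delta, L / max(L), type = "l")   # normalised to a maximum of 1
....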
the customary use of p - values in scientific research has been attacked as being ill - conceived , and the utility of p - values has been derided . this paper reviews common misconceptions about p - values and their alleged deficits as indices of experimental evidence and , using an empirical exploration of the properties of p - values , documents the intimate relationship between p - values and likelihood functions . it is shown that p - values quantify experimental evidence not by their numerical value , but through the likelihood functions that they index . many arguments against the utility of p - values are refuted and the conclusion is drawn that p - values are useful indices of experimental evidence . the widespread use of p - values in scientific research is well justified by the actual properties of p - values , but those properties need to be more widely understood . + key words : p - value ; significance test ; likelihood ; likelihood function ; evidence ; inductive inference ; statistics reform .
in the 21 century we see the advent of true quantum technologies such as quantum computing and quantum metrology which are expected to outperform any conventional approach .these fields are rapidly evolving and have created the demand for strategies to govern individual quantum systems .thus , the theory of control of quantum systems has gained tremendous interest recently and is already a huge field .parallel to the theoretical advances , the experimental side has also seen a blast , especially due to the swift progress in intense and ultrashort laser pulse generation which opens up the prospect to observe and manipulate properties of single molecules , solid - state systems or atomic - scale phenomena in real - time .there have been numerous paradigms developed for different tasks and control objectives .one of the early ones is open - loop control which relies on the knowledge of the initial quantum system and a well - defined control objective to design control fields without considering feedback from measurements .this can be done coherently , i.e. we use these control fields in a way that does not destroy quantum coherence which was utilized for problems e.g. in quantum chemistry . in coherent control in general, the control operations consist of unitary transformations . however , some quantum systems may not be controllable using only coherent controls . for such uncontrollable quantum systems , it may be possible to enhance the capabilities of quantum control by introducing new control strategies where one is allowed to destroy coherence of the quantum systems during the control process ( incoherent control ) .optimal control techniques such as gradient - free convex optimization can also boost the convergence properties and efficiency of open - loop control designs .although open - loop strategies have achieved theoretically and practically significant success , they are quite limited in scope .it was natural to extend the studies to closed - loop control which has been investigated in depth in classical control theory and shown to be superior in many ways , most notably in reliability and robustness .these are essential in quantum control because any practical quantum technology - a quantum computer , for instance - has to be robust in the presence of noise or uncertainty . in closed - loop control ,the state information is used in shaping the control mechanism .we may split this paradigm into two categories ( although other categorizations also exist ) : adaptive learning control and quantum feedback control ( qfc ) . in the former case we havea closed - loop operation and each cycle is applied on a new sample .this procedure has gained great success where multiple samples are available , e.g. controlling molecules in an ensemble with lasers .the other concept , quantum feedback control ( qfc ) includes direct or indirect measurements on the state to gain information which can be fed back to achieve the desired performance .classical feedback control is well - understood and has tremendous advantages because - in principle - the measurement back - action can be neglected classically , i.e. we can acquire full information without disturbing the system . due to the intrinsically different nature of quantum mechanics , most importantly the well - known phenomena of quantum state collapse and the unnegligable measurement back - action , qfc faces a great number of challenges .nevertheless , much has been done since the first recognition ( see e.g. 
) of the importance of this paradigm and promising results have been obtained .the paper is organized as follows . in section [ sec : formalism ] ., we overview the most important elements of the general framework needed to formulate problems in quantum feedback control such as the markovian master equation , weak and continuous measurements , stochastic schrdinger and master equations .this can be useful for people who are not familiar with the formalism of this field and also helps the survey to be self - consistent , self - contained and to avoid ambiguities in notation . in section [ sec : gendesc ] ., we give a general description of qfc and in section [ sec : cohmodell ] ., we set up a coherent control model and outline some theorems regarding its capability .furthermore , in a generalization of this setup , it is convenient to compare the results with open - loop control . in section [ sec : quantumchaos ] ., we briefly introduce how quantum feedback can help us to understand quantum chaos better which has found to be essential in understanding the transition from quantum to classical . in the following section we review some of the tasks which are proven to be efficiently achievable such as quantum error correction , rapid state preparation and purification , entanglement generation .we start with a simple example : feedback control of a single qubit in a discrete - time setting .this clearly illustrates the central concepts .then we move on to a similar task but with a continuous measurements ; at the end of this section , we also make some remarks on recently emerged questions ( arbitrary large systems , feedback delay problem ) .entanglement generation is a novel example where quantum feedback is useful which is described in section [ sec : entgen ] .a projective measurement - based feedback scheme is described in section [ sec : projqubit ] which connects chaos and quantum feedback control from a different perspective .this scheme can be used for several purposes such as state purification or enhancing entanglement ( which has been proven for a two - qubit case [ sajat ] ) .we present reproduced simulations in some of the cases ( section [ sec : projqubit ] ) .in quantum control the systems we want to control are quantum systems , thus described by the general framework of quantum mechanics . for closed systemsthe state is described by a unit vector in a hilbert space ( * state space postulate ) .the time - evolution of the state of a closed quantum system is described by a unitary operator ( * evolution postulate ) .when two physical systems are treated as one combined system , the state space of the combined physical system is the tensor product space of the state spaces of the component subsystems ( * composition of systems postulate ) .quantum measurements are described by a set of measurement operators which act on the state space of the system being measured and satisfy ( the index refers to the measurement outcomes ) . if the state of the system immediately before the measurement is then the probability that we measure is and the measurement leaves the system in ( * measurement postulate ) . * * * * in many situations we only know a probability distribution about the states in which the system can be , i.e. ( the system is in a _ mixed state ) . in this case , it is convenient to introduce the density operator : . from construction we know that is hermitian , positive and . 
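for completeness , the standard textbook forms of the formulas referred to in the measurement postulate and in the definition of the density operator are ( this is a reconstruction in the usual notation , with m labelling the measurement outcomes , rather than a verbatim recovery of the original display equations ) :
\[
\sum_m M_m^{\dagger} M_m = I , \qquad
p(m) = \langle \psi | M_m^{\dagger} M_m | \psi \rangle , \qquad
|\psi\rangle \;\rightarrow\; \frac{M_m |\psi\rangle}{\sqrt{\langle \psi | M_m^{\dagger} M_m | \psi \rangle}} ,
\]
\[
\rho = \sum_i p_i \, |\psi_i\rangle\langle\psi_i| , \qquad \operatorname{tr}\rho = 1 .
\]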
also , if and only if the system is in a pure state and under a unitary transformation transforms as . _let us introduce now another class of measurements which provide only partial information about an observable .we can do this if we choose our measurement operators to be a weighted sum of projectors ( we will denote their eigenstates as ) , each on peaked about a different value of the observable , i.e. where is the normalization factor chosen so that satisfy the completeness relation and we assumed that the eigenvalues of the observable n are .as an example , if apply this measurement to a completely mixed state ( so ) and obtain the result , the post - measurement state is from which we see that the final state is peaked about the eigenvalue but has a finite width given by .measurements for which is large are called _ strong measurements and for which is small are called _ weak measurements . __ the above postulates might be enough to treat closed quantum systems ; however , in most practical situations we have to deal with open systems ( for a deep introduction we refer to ) .on one hand , this is due to the fact that any realistic system is subjected to a coupling to a second system ( we will use the term _ environment or _ bath for the second system if it is much larger than the first ) in an uncontrollable and non - negligable way . on the other hand , even if one can provide and solve a microscopic description for the combined system , most of the results would be irrelevant .open systems are also important when we want to monitor ( i.e. continuously measure ) a system .both cases are essential in quantum feedback control . _ _we start with a closed system .the schrdinger equation is where is the hamiltonian of the system and we have set to .it is easy to see that for a mixed state ( [ eq : schrodinger ] ) implies \ ] ] which is called the von neumann or liouville - von neumann equation .this can be written in the form where one can easily notice the analogy with the classical liouville equation . here, is the liouville super - operator ( it is a super - operator , since it maps operators to operators ) .we will drop the hats from the operators from now on .note , that if we work in the interaction picture , ( [ eq : neumanneq ] ) and ( [ eq : neumannliouville ] ) still hold for the interaction density matrix and interaction hamiltonian ( which we will denote with a subscript ) . we can write ( [ eq : neumanneq ] ) in an integral form as dt'}\ ] ] we can use this to solve ( [ eq : neumanneq ] ) perturbatively .formally , it is easy to generalize ( [ eq : neumanneq ] ) to open systems .consider the case when the system is in a bath ( thus , the hilbert space of the total system is ) .the hamiltonian of the total system can be written .the density matrix of the system can be obtained by tracing out the bath from the density matrix describing the total system ( s+b ) : .so ( [ eq : neumanneq ] ) takes the form \ ] ] however , in general the dynamics of can be rather involved and we have to make assumptions to proceed .let us assume that at the system is uncoupled from the environment , i.e. 
where represents some reference state of the bath .then the transformation from to some of can be written as .we introduced which is a map from the system space to itself and is called a _ dynamical map .it can be shown that these maps represent convex - linear , completely positive and trace - preserving quantum operations .if we neglect the memory effects in the reduced system dynamics ( justified later ) we can show that they also form a semigroup . under some mathematical conditions there exists a linear map ( let us call it ) which is the generator of the semigroup , so we can write . from this we rewrite ( [ eq : neumannliouville ] ) as which is called the _ markovian master equation .it was shown by lindblad in 1976 that the most general form of ( so that a solution is always a valid density matrix ) is ( assuming that the dimension of the hilbert space of the total system is ) + \sum_{k=1}^{n^2 - 1 } \gamma_k \left(l_k \rho_s l_k^{\dag } - \dfrac{1}{2 } l_k^{\dag}l_k \rho_s - \dfrac{1}{2}\rho_s l_k^{\dag}l_k \right)\ ] ] where the quantities are non - negative ( and can be shown that physically they play the role of relaxation rate for the different decay modes of the system ) , the operators are arbitrary operators , called the _ lindblad operators , satisfying that is bounded ( although this condition is usually ignored ) .the first term represents the unitary part of the dynamics and the second term is the dissipative part ( often denoted as ] is the dissipation superoperator , \rho = c\rho + \rho c^{\dagger } - { \langle c + c^{\dagger } \rangle}\rho ] is the dissipative superoperator , just as defined in ( [ eq : generalsme ] ) .now we introduce our measurement and feedback procedure .homodyne detection is a powerful technique in practical detection of light beams .this involves another strong , coherent signal ( called the local oscillator which , in the homodyne case , has the same frequency as that of the detected signal ) which is mixed with the original signal .this can help to eliminate the initial fluctuations of the laser and allows us to detect a quadrature of the system , e.g. after the field has interacted with the spins ; thus , we observe the photocurrent where is the evolution of the whole system .it is useful to introduce the integral form of , the integrated photocurrent which is then . using thiswe have to solve the quantum filtering problem .that is , given an atomic observable , we want to find the best estimate of given the prior observations , formally ] is the detection efficiency , is the effective interaction strength which is a function of and the drive amplitude ( this quantity manifests itself in the hamiltonian of the interaction of the cavity ) and , as a reminder , \rho=\rho = f_z\rho + \rho f_z^{\dagger } - { \langle f_z + f_z^{\dagger } \rangle}\rho ] it is not possible to prepare any desired eigenstate of from any mixed initial state regardless how we choose .this can be proved in a straighforward fashion by proving that so we can not increase the purity of the state with ( [ eq : generalolcforanalysis ] ) . in the olc case with \neq 0 ] .the proof of this is rather technical and can be found in ( the proof is also based on the results in ) .we can conclude that if we choose the measurement channel appropriately , we can reach every eigenstate of asymptotically with the mfc model , while from theorem [ th : olclimit ] .we saw that in the olc model it is not the case . 
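before turning to the comparison of control strategies , the key equations of the formalism sketched above can be collected in one place . the following is a hedged recap in one common convention ( wiseman milburn style notation ; the symbols h , l_k , \gamma_k , c , \eta and dw are as introduced in the text , and the exact placement of rates and factors of 2 varies between references , so this is a reconstruction rather than the paper 's own display equations ) . the von neumann equation and its integral form , the lindblad master equation , and the homodyne stochastic master equation with its photocurrent read
\[
\dot{\rho} = -\,i\,[H,\rho] \equiv \mathcal{L}\rho , \qquad
\rho(t) = \rho(0) - i \int_0^{t} [\,H(t'),\rho(t')\,]\,dt' ,
\]
\[
\dot{\rho}_S = -\,i\,[H,\rho_S] + \sum_{k=1}^{N^2-1} \gamma_k \Big( L_k \rho_S L_k^{\dagger} - \tfrac{1}{2} L_k^{\dagger} L_k \rho_S - \tfrac{1}{2} \rho_S L_k^{\dagger} L_k \Big) ,
\]
\[
d\rho = -\,i\,[H,\rho]\,dt + \mathcal{D}[c]\rho\,dt + \sqrt{\eta}\,\mathcal{H}[c]\rho\,dW , \qquad
dy = \sqrt{\eta}\,\langle c + c^{\dagger} \rangle\,dt + dW ,
\]
with $ \mathcal{D}[c]\rho = c\rho c^{\dagger} - \tfrac{1}{2} c^{\dagger} c \rho - \tfrac{1}{2} \rho c^{\dagger} c $ and $ \mathcal{H}[c]\rho = c\rho + \rho c^{\dagger} - \langle c + c^{\dagger} \rangle \rho $ . for the partial ( weak ) measurements introduced earlier , one common gaussian parametrization is $ M_{\alpha} \propto \sum_n e^{-k(\alpha-n)^2/4}\,|n\rangle\langle n| $ , with large k giving a strong measurement and small k a weak one .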
in this sense ,the measurement - based feedback control is superior to open - loop control in the generalized model as well ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` anyone who uses words the ' ' quantum `` and ' ' chaos `` in the same sentence should be hung by his thumbs on a tree in the park behind the niels bohr institute . ''joseph ford _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the unravelling of the connection between chaos and quantum mechanics - despite the great interest it had gained - has been proven to be a puzzling question and was a subject to heated debates among physicists and mathematicians . as a result , there is an extensive literature on it ( for a good summary , see e.g. ) and the review of it would greatly exceed the framework if this paper and would not be relevant . rather , we focus on the recent development of the topic which shows that continuous measurements and feedback plays an important role in understanding the subject . the first studies related to classical chaos date back as long as the end of the 19th century ( poincar , 1892 ) and chaotic systems can be characterized by their exponential sensitivity to the initial conditions .this sensitivity can be measured by the lyapunov exponent which yields the asymptotic rate of exponential divergence of two trajectories which start from neighbouring points in the phase space .if the ( maximal ) lyapunov exponent is positive for a system , it is said to be chaotic . in closed quantum systems , however , the time evolution is unitary which does not allow the exponential divergence of the trajectories ( in hilbert space ) . in other words ,the evolution of a closed quantum system is necessarily quasiperiodic ( this can be directly shown from the quantum liouville equation , for example ) .quantum chaos traditionally meant the study of quantized versions of classical chaotic systems .the paradox is apparent : how can classical mechanics emerge from quantum mechanics in an appropriate macroscopic limit if the former manifestly exhibits the above mentioned property but the latter does not ?one - and the dominant - way to resolve this is the observation of the fact that every experimental setup involves measurement , therefore open quantum systems .it was first noted in by lloyd and slotine that the nonlinear dynamics induced by weak quantum feedback could be used to create a novel form of quantum chaos . in this context, weak quantum feedback means that one performs a collective measurement on a large number of identical systems , thus obtains the average value of an observable while only slightly disturbing the individual systems , and then feed back this information .this form of weak measurement can be realized in nmr for example , where it is possible to monitor the induction field produced by a large number of precessing spins which gives the average value of their magnetization along a given axis but only slightly disturbs the spins . 
in a general picture ,it formally means the following .suppose we have identical , noninteracting quantum systems , each characterized by .using for example the povm described in section [ subsec : measurements ] . , it is possible to perform a measurement on which determines the single - system reduced density matrix to some degree of accuracy while disturbing it by with the property that if then . nowif we feed back this information ( i.e. apply to each system a unitary transformation ) then the single - system density operator will be governed by the equation where it is crucial to note that can be any ( possibly nonlinear ) function of .taking the continuous limit this immediately leads to \ ] ] where is the hamiltonian corresponding to .the possible forms of and nonlinear quantum transformations in general were analized in . besides other applications , such as , for example , creating schrdinger s cats , i.e. quantum systems that exist in superpositions of two quasiclassical states , systems that obey nonlinear equations could be used to create _ true quantum chaos ; this is because the sse need not preserve the distances of the trajectories . _a further analysis on how chaos can emerge from the sme even far from the classical limit was done in by habib et al .they choose the observable to be the the position operator , thus the sme takes the form of ( [ eq : smem ] ) with and consider the duffing oscillator ( single particle in a double - well potential , with sinusoidal driving ) which has a hamiltonian where is the momentum operator , are parameters which determine the potential and the strength of the driving force .let us label the possible realizations of the noise process by . introduce the divergence between a fiducial trajectory and another ( `` shadow '' ) trajectory infinitesimally close to it : .we can now define the observatinally relevant lyapunov exponent by which is reasonable because we are interested in the sensitivity of the system to changes in the initial conditions and not in the changes in the noise realizations so we keep that fixed . by simulations using paralel supercomputers they find that , after a behaviour , converges to a positive , finite value and this value is greater if the measurement strength is greater . from thesewe can conclude that there exists a purely quantum regime which evolves chaotically with a positive , finite lyapunov exponent . in section [ sec : projqubit ] .another kind of chaos is introduced which emerges from the conditional dynamics of qubits using a specific , selective protocol which can be also useful to perform control tasks .we see chaotic behaviour directly in the hilbert - space and also in the entanglement in the multiqubit case . at the end of that sectionone can find a summary of feedback - induced chaos . in the following , we review some tasks which can be impleneted by qfc and - where relevant - compare their efficiency to other schemes .consider the following task : we prepare a qubit in one of two non - orthogonal states and with overlap , for example where . 
in the bloch representation means that the two states lie in the plane rotated by about the z - axis .consider the following noise : =p(\sigma_z \rho \sigma_z ) + ( 1-p)\rho\ ] ] where is the well - known pauli operator and .this is called a dephasing noise : with probability it applies the phase flip and with probability it leaves the system unaltered .the dephasing noise has an effect of decreasing the -component of the bloch vector .the task of stabilizing a qubit against this noise was considered in and has recently been investigated experimentally in with a photonic polarization qubit .we want to find the quantum operation which corrects the state after the noise has been applied and maximizes the average fidelity between the input state and the corrected state , i.e. \arrowvert \psi_i \rangle}\right]\ ] ] where has to be a cptp ( completely positive and trace - preserving ) map .now consider the first strategy : _`` do nothing '' .we will see that in some cases this trivial strategy can be quite efficient .if we calculate the sum in ( [ eq : max ] ) with being the identity , the average fidelity we get is which is plotted on figure [ fig : compare1].a ._ now consider another - based on a classical concept - strategy : `` discriminate and prepare '' .this means that we try to distinguish the outcoming state with a projective measurement and then prepare the state based on this result .this is a classical concept because we try to gain as much information from system as we can .the optimal projective measurement ( in terms of the average probability of success ) we can do succeeds with ( helstrom s measurement ) , independent of the noise strength .now we have to choose our states which will be prepared after the measurement .if we say that we prepare if the measurement result is , this yields an average fidelity of however , it can be shown that we can obtain a better average fidelity if we prepare the states i.e. we prepare if we measured and we prepare if we measured .this gives an average fidelity of which is plotted on figure [ fig : compare1].b .its optimality can be shown using convex optimazation . in some regions ( for example when is small )this scheme is outperformed by the `` do nothing '' scheme .note that this is also a feedback scheme as our choice of state preparation depends on the measurement result .we say it is classical , however , because the idea is based on a classical concept , i.e. acquire as much information about the system as possible .now we set up a feedback control scheme with weak , non - destructive measurements .first we define our measurement operators as where where ( the eigenstates of ) .this measurement can be implemented by using an ancillary qubit , a projective measurement and an entangling gate . 
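the two baseline strategies can be evaluated numerically . the r sketch below is my own illustration : the parametrization of the two states as $ |\psi_{\pm}\rangle = \cos(\theta/2)|0\rangle \pm \sin(\theta/2)|1\rangle $ and the angle $ \theta = \pi/4 $ are assumptions chosen for the example , and the ` discriminate and prepare ' variant coded here is the simple one that re - prepares $ |\psi_{\pm}\rangle $ according to the helstrom outcome , not the improved preparation discussed above .
....
# average fidelities of 'do nothing' vs simple 'discriminate and prepare'
# under dephasing noise, for an assumed pair of non-orthogonal qubit states
th <- pi / 4                                  # assumed angle between each state and z
ket <- function(s) matrix(c(cos(th / 2), s * sin(th / 2)), ncol = 1)
psi <- list(ket(+1), ket(-1))
Z <- matrix(c(1, 0, 0, -1), 2, 2)
dephase <- function(rho, p) p * Z %*% rho %*% Z + (1 - p) * rho
fid <- function(target, rho) Re(t(Conj(target)) %*% rho %*% target)

# helstrom measurement for these two states is a projective measurement of sigma_x
xp <- matrix(c(1, 1), ncol = 1) / sqrt(2)     # +x outcome -> guess psi_+
avg_fids <- function(p) {
  f_nothing <- 0; f_dp <- 0
  for (i in 1:2) {
    rho_in  <- psi[[i]] %*% t(Conj(psi[[i]]))
    rho_out <- dephase(rho_in, p)
    f_nothing <- f_nothing + fid(psi[[i]], rho_out) / 2
    pr_plus <- Re(t(Conj(xp)) %*% rho_out %*% xp)        # prob. of guessing psi_+
    f_dp <- f_dp + (pr_plus * fid(psi[[i]], psi[[1]] %*% t(Conj(psi[[1]]))) +
                    (1 - pr_plus) * fid(psi[[i]], psi[[2]] %*% t(Conj(psi[[2]])))) / 2
  }
  c(do_nothing = f_nothing, discriminate_prepare = f_dp)
}
sapply(c(0, 0.1, 0.2, 0.3, 0.4, 0.5), avg_fids)
....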
the parameter appearing in the measurement operators defined above describes the strength of the measurement : for the operators become the identity, for we get a projective measurement. once we have performed the measurement, we apply another operation based on the result ( note that this is a feedback procedure ) : we choose the angle to be if we obtain 0 from the measurement and if we obtain 1. it can be shown that the procedure is optimal if we choose \ ] ] with . so our correction operation altogether takes the form =(z_{+\eta } m_0)\rho ( z_{+\eta } m_0)^{\dagger } + ( z_{-\eta } m_1)\rho ( z_{-\eta } m_1)^{\dagger}\ ] ] and the average fidelity we obtain is , which is plotted on figure [ fig : compare1].c . the scheme deserves some interpretation. first note that the dephasing noise ( [ eq : noise ] ) can be viewed as a rotation of the bloch vector of the state by ( with equal probability , and being determined by ) around the z-axis. our strategy is to use a measurement which determines the sign of ; moreover, we want to adjust its strength so that we can vary the trade-off between information gain and back-action. then we apply the feedback : we choose it to be a unitary operation which rotates the state back towards the desired axis based on the measurement result. figure [ fig : compare2 ] shows the difference between the quantum feedback scheme and the other schemes - motivated by classical control - in the average fidelities ; it is apparent that , so the quantum feedback scheme always outperforms the other schemes. it can also be shown - using the same technique as in the previous case - that for this task our feedback procedure is optimal. [ figure [ fig : compare1 ] caption : the average fidelities of the `` do nothing '' scheme, the `` discriminate and prepare '' scheme ( [ eq : discprep2 ] ) and the `` quantum feedback '' scheme ( [ eq : quantumperformance ] ) . ] the following problem is very similar to the previous one : the purification of a qubit in the fastest possible time. we can make use of what we have set up in section [ sec : formalism ] and perform continuous measurements to speed up the rate of purification. there are several papers concerned with feedback control of two-state quantum systems ; here we specifically follow . the continuous measurement will be performed on the -component of the spin-1/2 particle ( so represents the observable ) and we will use the bloch representation with bloch vector . with these, the sme ( [ eq : stochasticschrodingerdensity ] ) in terms of the bloch components becomes from which we can see that - not surprisingly, as the measurement singles out the -direction - the relation between and is a constant of motion ( the initial angle in the plane , is constant ).
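a short euler-maruyama integration of the conditioned evolution illustrates both features just mentioned - the fixed in-plane angle and the growth of purity under continuous monitoring of sigma_z. the sketch below works directly at the density-matrix level rather than with the bloch-component equations; the prefactors 2k and sqrt(2k) are one common normalization of the measurement rate and are not necessarily the convention used in the text.

```python
import numpy as np

# euler-maruyama integration of the conditioned master equation for continuous
# monitoring of sigma_z on a single qubit (illustrative convention).

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sme_trajectory(rho0, k=1.0, dt=1e-4, n_steps=20_000, seed=1):
    rng = np.random.default_rng(seed)
    rho = rho0.copy()
    history = []
    for _ in range(n_steps):
        ez = np.trace(rho @ sz).real
        lindblad = sz @ rho @ sz - rho                       # D[sz] rho
        innovation = sz @ rho + rho @ sz - 2 * ez * rho      # H[sz] rho
        dw = rng.normal(0.0, np.sqrt(dt))
        rho = rho + 2 * k * lindblad * dt + np.sqrt(2 * k) * innovation * dw
        rho = 0.5 * (rho + rho.conj().T)                     # enforce hermiticity
        rho /= np.trace(rho).real                            # keep tr(rho) = 1
        history.append([np.trace(rho @ s).real for s in (sx, sy, sz)])
    return np.array(history)

if __name__ == "__main__":
    rho0 = 0.5 * np.eye(2, dtype=complex) + 0.1 * sx + 0.1 * sy   # mixed start
    traj = sme_trajectory(rho0)
    angle = np.arctan2(traj[:, 1], traj[:, 0])
    purity = 0.5 * (1 + (traj ** 2).sum(axis=1))
    print("spread of the in-plane angle :", angle.max() - angle.min())
    print("initial / final purity       :", purity[0], purity[-1])
```

on a single trajectory the azimuthal angle in the plane stays constant to numerical precision, while the purity drifts upwards - precisely the two observations that motivate the feedback protocol analyzed next.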
defining we can reduce ( [ eq : blochevo ] ) to we define the impurity of the system as .this is in general not a good measure of mixedness .nonetheless , it has a simple analytical form and it is equal to the von neumann entropy in the limit of high purity .it is possible to obtain the evolution of from the sme using the linear quantum trajectory formulation ( this is an equivalent formulation of the sme in which the equations ( [ eq : blochevodelta ] ) become linear ) and the solution is which must be solved numerically .however , we can approximate this in the long time limit ( noting that if is large than the integral does not depend on and using the taylor expansion ) and we have and in the short time limit decays exponentially with rate .so if we do not consider quantum feedback , the only way to speed up the reduction of the impurity is to increase the measurement strength .the motivation to introduce a unitary operation during the measurement is the same as in section [ sec : stabqubitdiscrete ] .one can calculate that the evolution of the length squared of the bloch vector : and it is apparent that the best increase we can achieve is when so the bloch vector lies in the plane .so , if we apply another hamiltonian ( for a time period ) , which generates a rotation of the bloch vector by an angle towards or away from the plane while maintaining , we can increase our efficiency .we can note two important things .the first one is that it is a ( real - time ) feedback procedure as we adjust the extra hamiltonian at each depending on the measurement result .secondly , that to achieve the best efficiency , we must choose the angle such that it exactly cancels the stochastic evolution which kicks out the bloch vector from the plane ; this , however , leads to the choice of which may require an arbitrary large hamiltonian resource .it is possible to solve the equation of motion and we obtain a simple result from where it is easy to see if we choose a very small target impurity ( so is large ) , the time needed to achieve the target in the first case ( which we can call classical as it is based on a classical idea ) and in the quantum feedback case has a ratio of so it is possible to achieve a speed - up factor of 2 , in the limit when the measurement time is large compared to the measurement rate , with the help of the feedback scheme .this also allows us to perform state preparation : when the desired purity is achived , we apply a unitary on the system to rotate it to the desired target state .this is possible because we know that the bloch vector stayed unbiased with respect to the measurement basis .the speed - up factor is a theoretical upper bound and can be less if we put constraints on the hamiltonian .this latter case is qualitatively analyzed in .it was proved rigorously in that this is indeed an optimal feedback procedure for the task , using bellman equations and verification theorems .also , it has been proven in general that in the optimal feedback control regime , it is always preferable to choose the basis of the measurement not to commute with the system density matrix .the whole procedure was extended to the two - qubit case , where one is allowed to ( weakly ) measure only one of the qubits ( say the first ) .one might expect that the best way to purify the second qubit is to apply the optimal protocol to the first one ; this , however , is not true and was falsified by a counter - example .one can naturally ask the question : how does this speed - up change if the 
system is arbitrary in size ?this was considered in and it was proven that for an observable with distinct , equally spaced eigenvalues the scheme can boost the rate of purification by at least a factor of ( assuming again infinitely large hamiltonian resource ) .this generalized problem is significantly more involved than the qubit case and it is an open question whether the feedback procedure which achieves this performance is optimal or not .there is another important remark . in a realistic setupwe always have to consider the possible sources of delays which can affect the whole feedback loop .the total effective feedback delay is the sum of delays in the loop as the reciprocal detector bandwidth , the time needed to perform the filtering and control calculations , response time of the actuator ( e.g. laser ) and the electronic delays between the devices .the aforementioned feedback schemes can only work well if the dynamical timescale of the system is large compared to the effective feedback delay . despite some remarkable developments of the devices ( responsive lasers , electro - optic modulators ) , it is still not the case ; analyses the protocol when imperfections in the controls are introduced .they find that delays in the feedback loop have the most effect and for systems with slow dynamics , inefficient detection causes the biggest error .this was also the motivation for a recently proposed idea where the feedback procedure is replaced by an open - loop design together with a quantum filtering .the open - loop control is applied for some time ( which time period is significantly longer than the dynamical timescale of the system ) and the quantum filtering can run parallel or offline ( depending on the control objective ) .the scheme is proven to be comparable in efficiency in several tasks ( rapid measurement and purification , for instance ) and much less sensitive to the delays caused by the limits of technology ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` entanglement is iron to the classical world s bronze age . ''chuang _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ quantum entanglement is a central concept in quantum mechanics and has many applications in quantum information theory and quantum computation . with the aid of entanglement ,otherwise impossible tasks may be achieved , for example in quantum communication , and it is also believed to be vital to the functioning of a quantum computer . there has been a rapid development of devices that can produce entanglement which often rely on highly controlled interactions .these can be based on trapped ions ( see e.g. where the authors report the creation of greenberger - horne - zeilinger states with up to 14 qubits ) or spatial confinement of the photons with strong atom - field coupling in a cavity , for instance. 
applying feedback control schemes to this task was found to be useful in many cases .in fact , entanglement protection or generation is one of the most attractive applications of quantum feedback .there are a number of studies which have demonstrated that a feedback controller can effectively help the distribution of entanglement in a quantum network .mancini and wiseman showed that direct feedback can be used to enhance the correlation of two coupled bosonic modes .the optimal measurement turns out to be nonlocal homodyne measurement in this case .yanagisawa presented a deterministic scheme of entanglement generation at the single - photon level between spatially separated cavities using quantum non - demolition measurement and an estimation - based feedback controller .following these advances , in petersen et al .described a method to avoid entanglement sudden death in a quantum network with measurement - based feedback control .entanglement sudden death means that entanglement completely disappears in a finite time , in which case conventional techniques - e.g. entanglement distillation - can not assist .they consider a realistic scenario ( the quantum channel is in contact with the environment and the homodyne detector has a finite bandwith ) with a linear continuous - variable cavity model .the cavities are spatially separated and the interaction is simply mediated by an optical field , in contrast to , where the bosonic modes interact through an optical nonlinearity . here , we review the control of entanglement generation between two qubits using continuous weak measurements and local feedback in more detail . this was considered in and , as we will see , it is an application of jacobs protocol ( a quite tricky application , we might add ) described in the previous section , so its formalism fits well in the line ( see also for a more general discussion and useful introduction ) .note also that the two qubit itself is a fundamental element of highly entangled states : in quantum computing , all the ( unitary ) entangle operations on many spins can be implemented by compositions of those on the two qubit which is referred to as the universality of quantum circuits .consider the density matrix of the two qubits which evolves according to the sme given in ( [ eq : stochasticschrodingerdensity ] ) .specifically , we consider the observable to be , so the system evolves according to + \sqrt{2k}((\sigma_z \otimes \sigma_z ) \rho + \rho ( \sigma_z \otimes \sigma_z ) - 2{\langle \sigma_z \otimes \sigma_z \rangle}\rho)dw\ ] ] we want to quantify the entanglement of the system . for this , it is useful to expand the density operator in the pauli basis ( also called as the fano form ) . for two qubitsit takes the form and the coefficients can be found as . from the normalization conditionwe also know that .it is possible to quantify the entanglement between the qubits using . a pure bell - state ( maximally entangled state )has ; if then there is no classical correlation of the two qubits . for a product state and for a mixed state , increasing leads to an increase in both purity and entanglement . 
is also invariant under single qubit rotations .these equations provide the basis if one wishes to perform numerical simulations on this problem ; in the following , however , we only focus on the main ideas rather than technical calculations .let us introduce the concept of decoherence - free subspace ( dfs ) .a dfs is a subspace of the hilbert space of the system that is invariant to non - unitary dynamics , i.e. it remains unaffected by the interaction of the system and its environment .this was first introduced in a quantum information theory context as these subspaces prevent destructive environmental interactions by isolating quantum information . what are the conditions for the dfs to exist ?there are several possible formulations in which we can answer this question , e.g. in the hamiltonian formulation , operator - sum representation formulation or the semigroup formulation . herewe give a description using tha latter one ( as all the equations were already set up in the paper ) .consider the lindblad form of the markovian master equation given in ( [ eq : generalgenerator ] ) .the dissipative part determines whether the dynamics of a quantum system will be unitary or not ; in particular , when =0 $ ] , the dynamics will be decoherence - free .let span where is the hilbert space of the system . under the assumptions that the parameters are not fine tuned and there is no dependence on the initial conditions of the initial state of the system , a necessary and sufficient condition to bea dfs is that all basis states are degenerate eigenstates of the error generators ( lindbald operators ) . in our casewe have two decoherence - free subspaces , given by and which can be found by observing that the measurement operator has two degenerate eigenvalues .one can check that once is restricted to the dfs then according to the sme ( [ eq : smezz ] ) , so the measurement does not extract any useful information .note , however , that it is easy to rotate the system out of the dfs by applying hadamard gates locally to the qubits and this is an invertible operation .this means that it is possible to turn on and off the entanglement production procedure without turning on and off the measurement device which has practical advantages .now comes the essential idea .first we want to drive the system to the dfs , in which case the system will be in a classically correlated state .once in the dfs , the system is driven towards the maximally entangled bell state .this can be done by using only local unitary operations and the measurement of , however , our goal is to make use of jacobs protocol .let us introduce two encoded qubits : the first will represent the extent to which information is found within the two dfs . 
if it is in the state then the system is confined to , respectively .the second qubit contains the information encoded within the dfs .physical operations can be split into two categories : the ones which commute with and the ones which do not .the former operations will only affect the second encoded qubit ( as these operations leave the system inside the dfs ) and the latter will only affect the first encoded qubit .the basic idea is that we apply jacobs protocol to the encoded qubits .if we rapidly purify the first one , it means that we rapidly forced the system into the dfs .then we apply the same protocol to the second encoded qubit .this procedure can be implemented and purifying the second encoded qubit along a specific axis generates entanglement in the physical system .note that the fastest rate of purification does not necessarily provide the fastest rate of entanglement generation .the protocol described in the next section can also be used to provide entanglement generation . in the context of quantum computation and quantum information , conditional dynamics of qubit systems have gained considerable amount of attantion , as we saw in section [ sec : stabqubitdiscrete ] . and [ sec : puriffeedback ] .here we introduce another quite exotic - looking transformation : with the normalization factor , so simply squares the matrix elements .this transformation can be realized using basic steps involving feedback ; we will restrict ourselves to qubits here .assume we have two identical copies of the same state and consider the spins ( qubits ) pairwise : .now apply the well - known xor - gate to the pair which is where means addition mod 2 .the third and last step is easy again , however , this is the key to the nonlinearity : measure the spin of the second qubit along the -axis and keep the pair only if the result is `` down '' .the whole transformation can be written in a compact form : where .it can be easily checked that indeed .the fact that does not preserve the trace means that , with some finite probability , the transformation can fail .the procedure can be thought as a feedback process because it consists a filtering , based on a ( projective ) measurement record .there are possible generalizations of .for example , instead of qubits , we can use arbitrary dimensional hilbert spaces ( with the generalized xor gate introduced in ) .this strong feedback based , nonlinear transformation can be used to optimally distinguish between nonorthogonal states or to purify mixed states . as an example of the latter ,let us consider a qubit pair with density matrix . after we squared the density matrix elements with , we apply ( a rotation in the hilbert space ) with the parametrization and let us choose .one step of the whole dynamics becomes then =u(\mathcal{s}\rho)u^{\dagger}\ ] ] the goal is to use this transformation to restore one of the bell states which has been perturbed ( due to decoherence , for example ) .define the state as and assume that our initial state is which has a fidelity with the original state .figure [ fig : fid1 ] . 
is a plot of the fidelities at every iteration step. one can see that after an even number of iterations the state converges to the target. this happens because is part of the stable cycle of the map . the length of this stable cycle is two, the other member being a state orthogonal to . the procedure also generates entanglement in this case. [ figure [ fig : fid1 ] caption : the initial fidelity is ; after 20 steps the iteration converges to the stable cycle. note that the convergence is not necessarily monotonic : after the second iteration we have . ] where is the connection between this protocol and chaos? this question was raised in and further analyzed in . let us go back to the one-qubit case ( the transformation is still , defined in ( [ eq : ftrafo ] ) ). let us choose the initial state to be a pure state, which we can write in the riemann representation as , where is known as the riemann sphere. it is straightforward to show that transforms a pure state into a pure state and that the parameter transforms as . what we obtain is a nonlinear map on the riemann sphere with one complex parameter. such maps have been studied in detail since the beginning of the 20th century by fatou, who studied in particular the map , and later on by g. julia and b. mandelbrot, just to name a few who contributed to the subject. they showed that even the simplest nonlinear maps on complex numbers can exhibit extremely rich structures. for example, the famous mandelbrot set emerges from the rather simple-looking map . is a quadratic rational map, thus its julia set - the set of irregular points - is non-vacuous ( for a rigorous treatment of the dynamics of complex maps, see ). this is a usual definition for complex-valued maps to be considered chaotic. for example, consider the case with , so . the julia set is trivial in this case : the unit circle. with a definition analogous to ( [ eq : lyapunov ] ) we can calculate the lyapunov exponent and find that it is positive. in this sense, we can conclude that our projective-measurement-based feedback protocol, if applied iteratively, can lead to true chaos in the mathematical sense. figure [ fig : julia1 ] shows the rich structure of the julia set, using reproduced simulations. the program iterates the map for a given parameter and calculates the number of steps needed to reach the stable cycle for different initial states. one can observe the fractal-like structure which is a usual property of chaotic systems ; it shows that the convergence properties can change on arbitrarily small scales. one could also iterate mixed states and see that the purity follows irregular dynamics. the two-qubit case is much more complicated to treat analytically, as the initial state space and the parameter space are considerably larger, but in a suitable representation one finds chaotic behaviour in the entanglement as well. the following table summarizes some characteristics of the aforementioned proposals for feedback-induced chaos in quantum systems ( see also section [ sec : quantumchaos ] ) :

proposal            | measurement | classical lim. | stochastic | space | feedback
lloyd and slotine   | weak        | no?            | ?          | ?     | yes
habib et al.        | continuous  | yes            | yes        |       | no
qubit dynamics      | projective  | ?              | no         |       | yes

[ figure [ fig : julia1 ] caption : red means fastest, green means slowest convergence to the stable cycle ( which can be proven to be the only stable cycle ) ; in the blue domains the iteration does not converge under the criteria of the program. ]
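the following sketch reproduces both ingredients of this discussion in a few lines. it first verifies that the two-copy + cnot + postselection construction described earlier really squares the density-matrix elements of a single qubit ( we postselect on |0> of the second copy; whether this is the "down" outcome is only a labelling convention ), and then counts, for the induced map z -> z^2 on the riemann parameter, how many iterations are needed before an initial point settles - the same quantity plotted in the julia-set figure. the tolerance and the sample points are our own choices.

```python
import numpy as np

# (i) measurement-induced squaring of density-matrix elements via two copies,
#     a cnot, and postselection on |0> of the second copy.
# (ii) convergence-step count for the induced pure-state map z -> z**2.

def squared_map(rho):
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    big = cnot @ np.kron(rho, rho) @ cnot.conj().T
    keep = big[np.ix_([0, 2], [0, 2])]        # project copy 2 onto |0>
    return keep / np.trace(keep)

def pure_state(z):
    v = np.array([1.0, z], dtype=complex)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def steps_to_settle(z, tol=1e-6, max_steps=60):
    for n in range(max_steps):
        if abs(z) < tol or abs(z) > 1.0 / tol:
            return n
        z = z ** 2                             # induced map on the riemann sphere
    return max_steps                           # points on/near the unit circle

if __name__ == "__main__":
    rho = pure_state(0.4 + 0.3j)
    direct = rho ** 2 / np.trace(rho ** 2)     # element-wise squaring, renormalized
    print("construction matches element-wise squaring:",
          np.allclose(squared_map(rho), direct))
    for z0 in (0.5, 0.95, 1.0 + 0j, 1.05, 2.0):
        print("z0 =", z0, "-> steps:", steps_to_settle(z0))
```

points inside the unit circle flow to z = 0, points outside escape to infinity, and points on the circle - the julia set of this simplest case - never settle, which is the boundary structure the reproduced figure resolves at finer and finer scales.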
in this paper, some paradigms for quantum feedback control were reviewed. as quantum feedback control has been a rapidly growing research area for at least two decades and is therefore a huge field with extensive literature, the goal was not to survey it as a whole, but rather to give a fairly self-contained description of selected tasks which can be done efficiently using quantum feedback and to treat them in a consistent formalism. where it was relevant, comparisons to other control designs were made, and we can conclude that for many problems quantum feedback provides optimal results. however, some of these results do not take into account the delays which are inevitably present in an experimental setup. with practical considerations, it is possible that quantum feedback, at least measurement-based quantum feedback, loses its superiority over conventional methods. this is part of the reason why the field of coherent feedback networks and control is coming into focus ( for a survey see ). we can also conclude that quantum feedback is linked to fundamental theoretical questions and can institute novel forms of quantum chaos. in fact, in order to have a satisfactory ( and practically relevant ) quantum mechanical description of the system dynamics, we need the evolution of systems which are being measured ; the evolution of states is therefore naturally conditioned on measurement results in any experimental setup, which gives more insight into the quantum-classical correspondence. * acknowledgements * the author acknowledges sophie schirmer for the supervision, valuable discussions and suggestions, and tamas kiss for continuous mentoring throughout the years. without them this work could not have been completed. g. g. gillett, r. b. dalton, b. p. lanyon, m. p. almeida, m. barbieri, g. j. pryde, j. l. obrien, k. j. resch, s. d. bartlett, and a. g. white. experimental feedback control of quantum systems using weak measurements. , 104(080503), 2010.
in this review paper, we survey the main concepts and some of the recent developments in quantum feedback control. for consistency and clarity, essential ideas and notations in the theory of open quantum systems and quantum stochastic calculus, as well as continuous measurement theory, are developed. we give a general description of quantum feedback control, set up a coherent model and compare it to open-loop designs. objectives which can be achieved by feedback, such as rapid state preparation and purification or entanglement generation, are formulated and analyzed based on the relevant literature. the connection between quantum feedback and quantum chaos is also described and unravelled ; apart from its theoretical curiosity, it can shed more light on some of the intrinsic properties of this control paradigm. * key words * : quantum control, feedback control, coherent control, complex chaos * paradigms for quantum feedback control * + l. d. tóth, department of applied maths and theoretical physics, university of cambridge, wilberforce road, cambridge cb3 0wa, united kingdom ; primalight, department of electrical engineering, king abdullah university of science and technology, thuwal 23955-6900, saudi arabia
a standard formulation of supervised learning starts with a parametrized class of mappings , a training set of desired input - output pairs , and a loss function measuring deviation of actual output from desired output .the goal of learning is to minimize the average loss over the training set .a popular minimization method is stochastic gradient descent . for each input in sequence ,the parameters of the mapping are updated in minus the direction of the gradient of the loss with respect to the parameters .here we are concerned with a class of mappings known as convolutional networks ( convnets ) .significant effort has been put into parallelizing convnet learning on gpus , as in the popular software packages caffe , torch and theano .convnet learning has also been distributed over multiple machines .however , there has been relatively little work on parallelizing convnet learning for single shared memory cpu machines .here we introduce a software package called znn , which implements a novel parallel algorithm for convnet learning on multi - core and many - core cpu machines .znn implements 3d convnets , with 2d as a special case .znn can employ either direct or fft convolution , and chooses between the two methods by autotuning each layer of the network .fft convolution was previously applied to 2d convnets running on gpus , and is even more advantageous for 3d convnets on cpus . as far as we know, znn is the first publicly available software that supports efficient training of sliding window max - pooling convnets , which have been studied by . there is related work on using xeon phifor supervised deep learning . and unsupervised deep learning .we define a convnet using a directed acyclic graph ( dag ) , called the _ computation graph _ ( fig .[ fig : convnet_dag ] ) .each node represents a 3d image , and each edge some image filtering operation .( 2d images are a special case in which one of the dimensions has size one . )if multiple edges converge on a node , the node sums the outputs of the filtering operations represented by the edges . for convenience, the discussion below will assume that images and kernels have isotropic dimensions , though this restriction is not necessary for znn .the image filtering operations are of the four following types .* convolution * a weighted linear combination of voxels within a sliding window is computed for each location of the window in the image .the set of weights of the linear combination is called the _kernel_. if the input image has size and the kernel has size , then the output image has size .image size decreases because an output voxel only exists when the sliding window is fully contained in the input image .the convolution is allowed to be sparse , meaning that only every image voxel ( in every dimension ) within the sliding window enters the linear combination .* max - pooling * divides an image of size into blocks of size , where is divisible by .the maximum value is computed for each block , yielding an image of size . *max - filtering * the maximum within a sliding window is computed for each location of the window in the image . for a window of size and an input image of size , the output image has size .3d max - filtering can be performed by sequential 1d max - filtering of arrays in each of the three directions . for each arraywe keep a heap of size containing the values inside the 1d sliding window . each element of the array will be inserted and removed at most once , each operation taking . 
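the heap-based sliding maximum just described can be sketched in a few lines. the snippet below is an illustrative re-implementation in python, not code taken from the znn library; it uses lazy deletion so that every element is pushed once and popped at most once, giving o(log w) per heap operation for window width w, and a 3d max-filter would simply apply it along each axis in turn.

```python
import heapq

# sliding-window maximum for a 1-d array with a max-heap and lazy deletion.

def max_filter_1d(values, width):
    if width < 1 or width > len(values):
        raise ValueError("invalid window width")
    heap = []        # entries are (-value, index); python heaps are min-heaps
    out = []
    for i, v in enumerate(values):
        heapq.heappush(heap, (-v, i))
        if i >= width - 1:
            # lazily discard entries that have slid out of the current window
            while heap[0][1] <= i - width:
                heapq.heappop(heap)
            out.append(-heap[0][0])
    return out

if __name__ == "__main__":
    data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
    print(max_filter_1d(data, 3))   # -> [4, 4, 5, 9, 9, 9, 6, 6]
```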
for each position of the sliding windowthe top of the heap will contain the maximum value ..number of floating point operations ( flops ) required by a layer with nodes that all perform the same nonlinear filtering operation ( max - pooling , max - filtering , or transfer function ) . [ cols="<,<,<,<",options="header " , ] the 3d convnets contained four fully - connected convolutional ( c ) layers with kernels , each followed by a transfer function layer ( t ) with rectified linear function , and two max - filtering ( m ) layers .each convolutional layer the sequence of layer types was ctmctmctct .the output patch size was .the 2d convnets contained 6 fully - connected convolutional layers with kernels , each followed by rectified linear transfer function layer ( t ) , and two max - filtering layers ( 2nd and 4th ) .the sequence of layer types was ctmctmctctctct .the output patch size was .the znn measurements were performed by first running the gradient learning algorithm for 5 warm - up rounds and then averaging the time required for the next 50 rounds .the gpu measurements were averaged over 100 rounds .2d convnets were implemented as a special case of 3d convnets , by setting one of the dimensions to have size one .the width of the convnets was varied as described below .fft convolution was employed for 2d , and direct convolution for 3d to illustrate the use of both methods ; reversing this yields similar results .other network architectures and kernel sizes also yield similar results. + fig .[ fig:2dspeedups_threads ] shows speedup attained by various cpus as a function of two parameters , number of worker threads and network width .each graph shows the result of varying the number of workers while network width is held fixed . to achieve near maximal possible speedup znn requires sufficiently wide networks ( for multicore cpus and for the manycore cpu ) and sufficiently many worker threads ( number of hyperthreads for multicore and number of hardware threads for manycore ) .the value of the maximal speedup is equal to the number of cores or a bit larger ( maximal height of graphs ) . for a wide network on multicore cpus ,speedup increases linearly until the number of worker threads equals the number of cores .after that the increase continues at a slower rate . for wide networks on xeon phi , speedup increases linearly until the number of worker threads equals the number of cores , then more slowly until double that number , and then even slower until the number of hardware threads .the maximal achieved speedups for networks of different widths are shown in figs .[ fig:2dspeedups ] and [ fig:3dspeedups ] .while the preceding results show that znn can efficiently utilize cpus , it is also important to know how the resulting performance compares to gpu implementations of convnet learning . 
therefore , we benchmarked znn against caffe and theano , two popular gpu implementations .comparison can be tricky because cpu and gpu implementations by definition can not be run on the same hardware .we chose to run caffe and theano on a titan x gpu , and znn on an core amazon ec2 instance ( c4.8xlarge ) .we chose this particular comparison , because the alternatives seemed unfair .for example , we could have run znn on specialized hardware with more cpu cores than the ec2 instance .this comparison seemed unfair because the specialized hardware would have been much more costly than titan x and less accessible than amazon ec2 .also , we could have used gpu instances from amazon ec2 , but these are currently much slower than titan x ( or more on our benchmarks ) and have half the onboard ram . for caffe , both default and cudnn implementations were used . for 3d convnetswe only used theano , as the official release of caffe still does not support 3d convnets .our caffe and theano code is publicly available in the znn repository .znn used fft convolution for both 2d and 3d , as this was found to be optimal by the auto - tuning capability of znn .caffe and theano used direct convolution . , , and respectively .where caffe data is missing , it means that caffe could not handle networks of the given size.,scaledwidth=50.0% ] our convnets contained 6 fully - connected convolutional ( c ) layers , each followed by a rectified linear transfer function layer ( t ) , and two max - pooling ( p ) layers , either or .the sequence of the layer types was ctpctpctctctct .all networks had width , while the sizes of the kernels and the output patch varied .all benchmark times were for `` sparse training , '' meaning that the convnet is used to produce predictions for pixels in the output patch that form a lattice with period 4 in every dimension .the loss of predicted output pixels is due to the two layers of max - pooling .as noted before , znn can also perform `` dense training , '' meaning that the convnet is used to produce predictions for every pixel in the output patch by applying the convnet to a window that slides across every `` valid '' location in the input patch . requiring caffe or theano to perform dense trainingcould have been accomplished by computing sparse outputs in 2d and in 3d to assemble a dense output .this method is very inefficient and would have been no contest with znn . the comparison of 2d convnets is shown in fig . [fig : vsgpu2d ] .znn is faster than caffe and theano for sufficiently large kernels ( or larger ) .this makes sense because fft convolution ( znn ) is more efficient than direct convolution ( caffe and theano ) for sufficiently large kernels .such large kernels are not generally used in practice , so znn may not be competitive with gpu implementations for 2d networks .on the other hand , znn opens up the possibility of efficiently training networks with large kernels , and these might find some practical application in the future . the comparison of 3d convnets is shown in fig . [fig : vsgpu3d ] .znn is comparable to theano even for modest kernel sizes of and outperforms theano for kernel sizes of and greater .such kernel sizes are currently relevant for practical applications .again the benchmark makes sense , because we expect the crossover point for complexity of fft vs. direct convolution to occur for smaller ( linear ) kernel sizes in 3d . 
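a very rough operation-count model makes the crossover just discussed concrete. in the sketch below, direct convolution of an n^d image with a k^d kernel is charged (n - k + 1)^d * k^d multiply-adds, while fft convolution is charged three transforms of cost roughly c * n^d * log2(n^d) plus a pointwise product; the constant c and the neglect of padding and memory traffic are our own simplifications, so this illustrates the trend only and is not a model of znn's autotuner.

```python
import numpy as np

# crude flop model for direct vs fft convolution of an n^d image with a k^d kernel.

def direct_flops(n, k, d):
    return (n - k + 1) ** d * k ** d

def fft_flops(n, k, d, c=5.0):
    transform = c * n ** d * np.log2(float(n ** d))
    return 3 * transform + n ** d   # fwd image + fwd kernel + inverse + product

if __name__ == "__main__":
    n, d = 64, 3
    for k in (3, 5, 7, 9, 11):
        print(f"k={k:2d}  direct={direct_flops(n, k, d):.2e}  "
              f"fft={fft_flops(n, k, d):.2e}")
```

already in this toy model the crossover for 3d images of modest size occurs around kernel widths of 5-7, consistent with the benchmark observation that fft convolution pays off at smaller linear kernel sizes in 3d than in 2d.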
, and .,scaledwidth=40.0% ] working memory is another computational resource that is important for training convnets .given the limited amount of onboard gpu memory , we were unable to use theano to train 3d networks with kernel sizes larger than .we were also unable to use caffe to train many 2d networks ( see missing bars in fig . [fig : vsgpu2d ] ) .znn enables training of larger networks mostly because a typical cpu system has much more ram than even a top gpu .titan x , for example , has just 12 gb of onboard ram .additionally , znn can achieve even higher speed by using extra ram space , as in the case of fft memoization . when using fft - based convolutions , with the memoization disabled , znn is more efficient in its usage of ram than the proposed gpu methods .the memory overhead of the methods proposed in could be very high as it is proportional to the number of kernels in a layer .in contrast znn s memory overhead is proportional to the number of workers .znn is implemented in c++ and is publicly available under the gpl2 license ( _ https://github.com/zlateski/znn-release_ ) .it can use either fftw or intel mkl for ffts and either provided code or intel mkl libraries for direct convolution . using fftw instead of mkl yields same scalability but lower absolute performances due to the differences in single thread performances of the two libraries .the repository also provides alternative scheduling strategies such as simple fifo or lifo as well as some more complex ones based on work stealing .the alternative scheduling strategies achieve noticeably lower scalability than the one proposed in the paper for most networks .however , some very specific networks might benefit from alternative scheduling algorithms .future work can include automatic detection of the best scheduling strategy .znn achieves high performances by efficiently utilizing the available cpus .we expect an increase in the number of cores per chip ( or xeon phicard ) in the future , making znn even more practical .in fact , we have already used znn to achieve state of the art results in boundary detection and computation of dendritic arbor densities . having a large amount of ram available to the cpu, znn can efficiently train very large convnets with large kernels .znn allows for easy extensions and can efficiently train a convnet with an arbitrary topology , allowing for new research . unlike the znn s task parallelization model ,the current gpu implementations employ simd parallelism to perform computation on one whole layer at a time , thus limiting the network structure . mainly , the computation is parallelized such that a single thread computes the value of a single voxel of an output image. libraries like cudnn provide optimized primitives for fully connected convolutional layers by reducing all the required convolutions in the layer to a matrix multiplication , which is then parallelized on the gpu .extending the functionality requires the user to provide a parallelized implementation of the new layer type , which typically requires great knowledge of gpu programming , and might take a long time .contrary to that , znn s task parallelism allows for easy extensions by simply providing serial functions for the forward and backward pass , as well as the gradient computation , if required .znn s repository contains some sample extensions providing functionality of _ dropout _ and _ multi - scale _ networks .y. jia , e. shelhamer , j. donahue , s. karayev , j. long , r. girshick , s. guadarrama , and t. 
darrell , `` caffe : convolutional architecture for fast feature embedding , '' in _ proceedings of the acm international conference on multimedia _ , pp .675678 , acm , 2014 .j. bergstra , o. breuleux , f. bastien , p. lamblin , r. pascanu , g. desjardins , j. turian , d. warde - farley , and y. bengio , `` theano : a cpu and gpu math expression compiler , '' in _ proceedings of the python for scientific computing conference ( scipy ) _ , vol . 4 , p.3 , austin , tx , 2010 .j. dean , g. corrado , r. monga , k. chen , m. devin , m. mao , a. senior , p. tucker , k. yang , q. v. le , _ et al ._ , `` large scale distributed deep networks , '' in _ advances in neural information processing systems _ , pp . 12231231 , 2012 .j. masci , a. giusti , d. ciresan , g. fricout , and j. schmidhuber , `` a fast learning algorithm for image segmentation with max - pooling convolutional networks , '' in _ image processing ( icip ) , 2013 20th ieee international conference on _ , pp . 27132717 , ieee , 2013 .p. sermanet , d. eigen , x. zhang , m. mathieu , r. fergus , and y. lecun , `` overfeat : integrated recognition , localization and detection using convolutional networks , '' _ arxiv preprint arxiv:1312.6229 _ , 2013 .l. jin , z. wang , r. gu , c. yuan , and y. huang , `` training large scale deep neural networks on the intel xeon phi many - core coprocessor , '' in _ proceedings of the 2014 ieee international parallel & distributed processing symposium workshops _ , ipdpsw 14 , ( washington , dc , usa ) , pp . 16221630 , ieee computer society , 2014 .d. ciresan , a. giusti , l. m. gambardella , and j. schmidhuber , `` deep neural networks segment neuronal membranes in electron microscopy images , '' in _ advances in neural information processing systems _ , pp . 28432851 , 2012 .m. m. michael and m. l. scott , `` simple , fast , and practical non - blocking and blocking concurrent queue algorithms , '' in _ proceedings of the fifteenth annual acm symposium on principles of distributed computing _, pp . 267275 , acm , 1996 .m. helmstaedter , k. l. briggman , s. c. turaga , v. jain , h. s. seung , and w. denk , `` connectomic reconstruction of the inner plexiform layer in the mouse retina , '' _ nature _ , vol .500 , no .7461 , pp . 168174 , 2013 .u. smbl , a. zlateski , a. vishwanathan , r. h. masland , and h. s. seung , `` automated computation of arbor densities : a step toward identifying neuronal cell types , '' _ frontiers in neuroanatomy _ , vol . 8 , 2014 .n. srivastava , g. hinton , a. krizhevsky , i. sutskever , and r. salakhutdinov , `` dropout : a simple way to prevent neural networks from overfitting , '' _ the journal of machine learning research _ , vol . 15 , no . 1 , pp . 19291958 , 2014 .
convolutional networks ( convnets ) have become a popular approach to computer vision . it is important to accelerate convnet training , which is computationally costly . we propose a novel parallel algorithm based on decomposition into a set of tasks , most of which are convolutions or ffts . applying brent s theorem to the task dependency graph implies that linear speedup with the number of processors is attainable within the pram model of parallel computation , for wide network architectures . to attain such performance on real shared - memory machines , our algorithm computes convolutions converging on the same node of the network with temporal locality to reduce cache misses , and sums the convergent convolution outputs via an almost wait - free concurrent method to reduce time spent in critical sections . we implement the algorithm with a publicly available software package called znn . benchmarking with multi - core cpus shows that znn can attain speedup roughly equal to the number of physical cores . we also show that znn can attain over 90x speedup on a many - core cpu ( xeon phiknights corner ) . these speedups are achieved for network architectures with widths that are in common use . the task parallelism of the znn algorithm is suited to cpus , while the simd parallelism of previous algorithms is compatible with gpus . through examples , we show that znn can be either faster or slower than certain gpu implementations depending on specifics of the network architecture , kernel sizes , and density and size of the output patch . znn may be less costly to develop and maintain , due to the relative ease of general - purpose cpu programming . = 10000
grid boundaries pose major difficulties in current computational efforts to simulate 3-dimensional black holes by conventional cauchy evolution schemes .the initial - boundary value problem for einstein s equations consists of the evolution of initial cauchy data on a spacelike hypersurface and boundary data on a timelike hypersurface .this problem has only recently received mathematical attention .friedrich and nagy have given a full solution for a hyperbolic formulation of the einstein equations based upon a frame decomposition in which the connection and curvature enter as evolution variables . because this formulation was chosen to handle mathematical issues rather than for ease of numerical implementation , it is not clear how the results translate into practical input for formulations on which most computational algorithms are based .the proper implementation of boundary conditions depends on the particular reduction of einstein s equations into an evolution system and the choice of gauge conditions .the purpose of this paper is to elucidate , in a simple context , such elementary issues as : ( i ) which variables can be freely specified on the boundary ; ( ii ) how should the remaining variables be updated in a computational scheme ; ( iii ) how can the analytic results be implemented as a computational algorithm .for this purpose , we consider the evolution of the linearized einstein s equations in harmonic coordinates and demonstrate how a robustly stable and highly accurate computational evolution can be based upon a proper mathematical formulation of the initial - boundary value problem .harmonic coordinates were used to obtain the first hyperbolic formulation of einstein s equations . for a full account of hyperbolic formulations of general relativity .while harmonic coordinates have also been widely applied in carrying out analytic perturbation expansions , they have had little application in numerical relativity , presumably because of the restriction in gauge freedom .however , there has been no ultimate verdict on the suitability of harmonic coordinates for computation . in particular, their generalization to include gauge source functions appears to offer flexibility comparable to the explicit choice of lapse and shift in conventional numerical approaches to general relativity .there is no question that harmonic coordinates offer greater computational efficiency than any other hyperbolic formulation . 
for a recent application to the study of singularities in a space - time without boundary .here we use a reduced version of the harmonic formulation of the field equations .this allows us retain a symmetric hyperbolic system , and apply standard boundary algorithms , in a way that is consistent with the propagation of the constraints .we show , on an analytic level , that this leads to a well posed initial - boundary problem for linearized gravitational theory .our computational results are formulated in terms of a cartesian grid based upon background minkowskian coordinates .robustly stable evolution algorithms are obtained for plane boundaries aligned with the cartesian coordinates , which is the standard setup for three dimensional evolution codes .similar computational results , based upon long evolutions with random initial and boundary data , were previously found for the linearized version of the arnowitt - deser - misner formulation ( adm ) of the field equations , where the lack of a hyperbolic formulation required a less systematic approach which had no obvious generalization to other boundary shapes . in this paper, we also attain robust stability for spherical boundaries which are cut out of the cartesian grid in an irregular piecewise cubic fashion .this success gives optimism that the methods can applied to such problems as black hole excision and cauchy - characteristic matching , where spherical boundaries enter in a natural way .conventions : we use greek letters for space - time indices and latin letters for spatial indices , e.g. for standard minkowski coordinates .linear perturbations of a curved space metric about the minkowski metric are described by with similar notation for the corresponding curvature quantities , e.g. the linearized riemann tensor and linearized einstein tensor .indices are raised and lowered with the minkowski metric , with the result that .boundaries in the background geometry are described in the form with the spatial coordinates decomposed in the form , where the directions span the space tangent to the boundary .the linearized einstein tensor has the form where and we analyze these equations in standard background minkowskian coordinates .to linearized accuracy , in terms of the curved space connection associated with and the condition defines a linearized harmonic gauge .the diffeomorphisms of the curved spacetime induce equivalent metric perturbations according to the harmonic subclass of gauge transformations where the vector field satisfies .the linearized curvature tensor as well as the linearized einstein equations , are gauge invariant .we introduce a cauchy foliation in the perturbed spacetime such that it reduces to an inertial time slicing in the background minkowski spacetime .the unit normal to the cauchy hypersurfaces is given , to linearized accuracy , by the choice of an evolution direction , with unit lapse and shift in the background minkowski spacetime , defines a perturbative lapse and perturbative shift .harmonic evolution consists of solving the einstein s equations , subject to the harmonic conditions .this formulation led to the first existence and uniqueness theorems for solutions to the nonlinear einstein equations by considering them as a set of 10 nonlinear wave equations .in the linearized case , einstein s equations in harmonic coordinates reduce to ten flat space wave equations so that their mathematical analysis is simple . 
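for orientation, the familiar textbook form of this reduction is displayed below, written with the trace-reversed perturbation; the variable actually used in this paper may differ from it by a trace term and by sign or density-weight conventions, so the block is a reminder of the structure rather than a restatement of the paper's equations.

```latex
% standard linearized harmonic (de donder) reduction, textbook form
\begin{align}
  \bar{\gamma}_{\mu\nu} &= \gamma_{\mu\nu}
      - \tfrac{1}{2}\,\eta_{\mu\nu}\,\eta^{\alpha\beta}\gamma_{\alpha\beta}, \\
  \partial^{\mu}\bar{\gamma}_{\mu\nu} &= 0
      \qquad \text{(harmonic gauge condition)}, \\
  \Box\,\bar{\gamma}_{\mu\nu} \equiv
      \eta^{\alpha\beta}\partial_{\alpha}\partial_{\beta}\,\bar{\gamma}_{\mu\nu}
      &= 0
      \qquad \text{(ten decoupled flat-space wave equations)}.
\end{align}
```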
the cauchy data and at determine ten unique solutions of the wave equation , in the appropriate domain of dependence .these solutions satisfy so that they satisfy einstein s equations provided and , which can be arranged by choosing initial cauchy data satisfying constraints . for a detailed discussion , .although this standard harmonic evolution scheme led to the first existence and uniqueness theorem for einstein s equations , it is not straightforward to apply to the initial - boundary value problem .the ten wave equations for require ten individual pieces of boundary data in order to determine a unique solution .given initial data such that and at , as described above , the resulting solution satisfies the linearized einstein equations only in the domain of dependence of the cauchy data . in order that the solution of einstein s equations extend to the boundary ,it is necessary that at the boundary .unfortunately , there is no version of boundary data for , e.g. dirichlet , neumann or sommerfeld data for the ten individual components , from which can be calculated at the boundary .here we consider reduced versions of the harmonic evolution scheme in which only six wave equations are solved and this problem does not arise .these reduced harmonic formulations are presented below . a linearized evolution scheme for the harmonic einstein system ,can be based upon the six wave equations along with the four harmonic conditions because of the harmonic conditions , this system satisfies the spatial components of the linearized einstein s equations . as a result of the linearized bianchi identities , or linearized hamiltonian constraint and linearized momentum constraints are also satisfied , throughout the domain of dependence of the cauchy data , provided that they are satisfied at the initial time .this constrains the initial values of according to and where .then , if these constraints are initially satisfied , the reduced harmonic einstein system determines a solution of the linearized einstein s equations .the well - posedness of the system follows directly from the well - posedness of the wave equations for .the auxiliary variables satisfy the ordinary differential equations where enters only in the role of source terms .these differential equations do not affect the well - posedness of the system and have unique integrals determined by the initial values of .the reduced harmonic ricci system consists of the six wave equations along with the four harmonic conditions ( [ eq : auxeqs ] ) , which can be re - expressed in the form where we have set .together these equations imply that the spatial components of the perturbed ricci tensor vanish , .in addition , the bianchi identities imply that the remaining components satisfy where , in terms of the ricci tensor , and .together with the evolution equations , the bianchi identities imply that the hamiltonian constraint satisfies the wave equation .if the hamiltonian and momentum constraints are satisfied at the initial time , then also vanishes at the initial time so that the uniqueness of the solution of the wave equation ensures the propagation of the hamiltonian constraint . 
in turn , eq .( [ eq : momprop ] ) then ensures that the momentum constraint is propagated .thus the reduced harmonic ricci system of six wave equations and four harmonic equations leads to a solution of the linearized einstein s equations for initial cauchy data satisfying the constraints .the harmonic ricci system takes symmetric hyperbolic form when the wave equations are recast in first differential order form .thus the system is well posed .the formulation of the harmonic ricci system as a symmetric hyperbolic system and the description of its characteristics is given in appendix [ app : ricci ] .although well - posedness of the analytic problem does not the guarantee the stability of a numerical implementation it can simplify its attainment .the harmonic einstein and ricci systems are special cases of a one parameter class of reduced harmonic systems for the variable which satisfying the wave equations and the harmonic conditions , where .this system is symmetric hyperbolic when the auxiliary system is symmetric hyperbolic .this can be analyzed by setting in the auxiliary system , which then takes the form and implies the auxiliary system is symmetric hyperbolic when eq .( [ eq : subwave ] ) is a wave equation whose wave speed is positive .this is satisfied for and . in the range ,the wave speed is faster than the speed of light . only for the case , the harmonic ricci system , is the wave speed of the auxiliary system equal to the speed of light .the auxiliary system for the reduced harmonic einstein case has a well posed initial - boundary value problem but represents a borderline case .this could adversely affect the development of a stable code based upon a nonlinear version of the reduced harmonic einstein system .consider the initial - boundary problem consisting of evolving cauchy data prescribed in a set at time lying in the half - space with data prescribed on a set on the boundary .harmonic evolution takes its simplest form when einstein equations are expressed as a second differential order system .however , in order to apply standard methods it is necessary to recast the problem as a first order symmetric hyperbolic system .then the theory determines conditions on the boundary data for a well posed problem in the future domain of dependence . here is the maximal set of points whose past directed inextendible characteristic curves all intersect the union of and before leaving .appendix [ app : ricci ] describes the symmetric hyperbolic description of boundary conditions for the 3-dimensional wave equation .the basic ideas and their application to the initial - boundary value problem are simplest to explain in terms of the 1-dimensional wave equation , as follows .see appendix [ app : ricci ] for the analogous treatment of the 3-dimensional case .our presentation is based upon the formulation of maximally dissipative boundary conditions , the approach used by friedrich and nagy in the nonlinear case .an alternative description of boundary conditions for symmetric hyperbolic systems is given in ref . and for linearized gravity in ref . .the one - dimensional wave equation can be recast as the first order system of evolution equations where we have introduced the auxiliary variables and .given initial data at time subject to the constraint , , these equations determine a solution of the wave equation .note that the constraint is propagated by the evolution equations . 
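a small numerical sketch of this first-order reduction may be useful before the matrix form and its eigen-decomposition are written out below. the grid, the variable names ( u for the time derivative and v for the space derivative of the wave field ), and the choice of homogeneous sommerfeld data at both ends are our own; the point of the sketch is that only the incoming characteristic combination may be prescribed at each boundary, exactly as the maximally dissipative analysis requires.

```python
import numpy as np

# first-order form of the 1-d wave equation, u_t = v_x, v_t = u_x, evolved in
# characteristic variables with upwind differencing and characteristic
# (maximally dissipative) boundary conditions.

def evolve_wave(nx=201, courant=0.5, t_final=2.0):
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    dt = courant * dx
    u = np.exp(-100.0 * (x - 0.5) ** 2)      # initial pulse in the time derivative
    v = np.zeros_like(x)
    r, l = u + v, u - v                       # r moves left, l moves right
    for _ in range(int(t_final / dt)):
        # upwind transport: dr/dt = +dr/dx, dl/dt = -dl/dx
        r[:-1] = r[:-1] + dt / dx * (r[1:] - r[:-1])
        l[1:]  = l[1:]  - dt / dx * (l[1:] - l[:-1])
        # only the incoming characteristic is prescribed at each end;
        # homogeneous sommerfeld data sets it to zero so the pulse leaves the
        # grid, while dirichlet/neumann data would couple it to the outgoing mode.
        r[-1] = 0.0        # incoming mode at x = 1
        l[0] = 0.0         # incoming mode at x = 0
    u, v = 0.5 * (r + l), 0.5 * (r - l)
    return x, u, v

if __name__ == "__main__":
    x, u, v = evolve_wave()
    print("remaining energy-like norm:", np.sum(u ** 2 + v ** 2))
```

with this boundary treatment the discrete energy decays monotonically once the pulse reaches the boundary, mirroring the flux inequality used in the continuum argument; prescribing the outgoing combination instead would violate that inequality and destabilize the scheme.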
in coordinates the system ( [ eq : swe.first])-([eq : swe.last ] )has the symmetric hyperbolic form where the solution consists of the column matrix and in our case the source term but otherwise it plays no essential role in the analysis of the system .the contraction of eq .( [ shscalar ] ) with the transpose give the flux equation this can be used to provide an estimate on the norm for establishing a well posed problem , in the half - space with boundary at , provided the flux arising with the normal component of satisfies the inequality this inequality determines the allowed boundary data for a well posed initial - boundary value problem . as expected from knowledge of the characteristics of the wave equation ,the normal matrix has eigenvalues , with the corresponding eigenvectors these eigenvectors are associated with the variables re - expressing the solution vector in terms of the eigenvectors , the inequality ( [ eq : inequality ] implies that homogeneous boundary data must take the form where the parameter satisfies . the component , corresponding to the kernel of , propagates directly up the boundary and can not be prescribed as boundary data .non - homogeneous boundary data can be given in the form where is an arbitrary function representing the free boundary data at .( [ eq : bdrydata ] ) shows how the scalar wave equation accepts a continuous range of boundary conditions .the well - known cases of sommerfeld , dirichlet or neumann boundary data are recovered by setting , or , , respectively .note that there are consistency conditions at the edge .for instance , dirichlet data corresponds to specifying on the boundary and this must be consistent with the initial data for .the harmonic einstein system consists of the six wave equations ( [ eq : gammawave ] ) and the four harmonic conditions ( [ eq : auxeqs ] ) . since the wave equations for are independent of the auxiliary variables , the well - posedness of the initial - boundary value problem for follows immediately .furthermore , since the harmonic conditions propagate the auxiliary variables up the boundary by ordinary differential equations , the harmonic einstein system has a well posed initial - boundary value problem . a unique solution in the appropriate domain of dependence is determined by the initial cauchy data and at , the initial data at the edge and the boundary data at given in any of the forms described in appendix [ app : ricci ] ( e.g. dirichlet , neumann , sommerfeld ) .a solution of the linearized harmonic einstein system satisfies = 0 . as a result the bianchi identities imply and so that the constraints are satisfied provided the constraint eq s .( [ eq : auxil1 ] ) and ( [ eq : auxil2 ] ) are satisfied at . the free boundary data for this system consists of six functions .however , as shown in ref . 
, the vacuum bianchi identities satisfied by the weyl tensor imply that only two independent pieces of weyl data can be freely specified at the boundary .we give the corresponding analysis for the linearized einstein system in appendix [ app : weyl ] .this makes it clear that only two of the six pieces of metric boundary data are gauge invariant .this is in accord with the four degrees of gauge freedom consisting of the choice of linearized lapse ( one free function ) and linearized shift ( three free functions ) .a linearized evolution requires a unique lapse and shift , whose values can be specified explicitly as space - time functions or specified implicitly in terms of initial and boundary data subject to dynamical equations . in the case of a harmonic gauge , in order to assess whether explicit space - time specification of the lapse and shift is advantageous for the purpose of numerical evolution , it is instructive to see how it affects the initial - boundary value problem for the reduced harmonic einstein system . in the linearized harmonic formulation without boundary , a gauge transformation ( [ eq : gauge ] ) to a shift - free gauge is always possible within the domain of dependence of the initial cauchy data . in the presence of a boundary , consider a gauge transformation with , so that for harmonic evolution of constrained initial data , both and satisfy the wave equation . at , choose cauchy data for satisfying and on the boundary , require that satisfy then has vanishing cauchy data at , vanishing dirichlet boundary data at and satisfies the wave equation , so that .thus , for evolution of constrained data with a boundary , a shift - free harmonic gauge is possible .however , in this gauge , boundary data for all 6 components of can no longer be freely specified since the harmonic condition implies this relates neumann data for to dirichlet data for and would complicate any shift - free numerical evolution scheme . as an example , one could freely specify dirichlet boundary data for the 3 components and ( i ) obtain neumann boundary data from the -components of eq .( [ eq : noshift ] ) , ( ii ) evolve in terms of initial cauchy data and ( iii ) obtain neumann boundary data from the -component of eq .( [ eq : noshift ] ) .note the nonlocality of step ( ii ) , which would have to be carried out `` on the fly '' during a numerical evolution .the requirement that the shift vanish reduces the free boundary data to three components .it is also possible to eliminate an additional free piece of data by choosing a unit lapse , i.e. setting .suppose the shift has been set to zero , so that the harmonic condition implies .consider a gauge transformation , where satisfies so that the shift remains zero .for harmonic evolution of constrained data , both and satisfy the wave equation . at ,choose cauchy data for satisfying and ( where we assume the cauchy data is given on a non - compact set so that there are no global obstructions to a solution ) .on the boundary , require satisfy then because it is a solution of the wave equation with vanishing cauchy data and dirichlet boundary data .( alternatively , the lapse can be gauged to unity by a transformation satisfying , so that the harmonic source function still drops out of eq .( [ eq : einstein ] ) for the einstein tensor . ) a unit lapse and zero shift implies that so that can not be freely specified at the boundary . 
coupled with our previous results , imposition of a unit lapse and zero - shift reduces the free boundary data to the two trace - free transverse components , in accord with the two degrees of gauge - free radiative freedom associated with the weyl tensor .( in addition , the initial cauchy data must satisfy eq .( [ eq : noshift ] ) , and ) .a similar result arises in the study of the unit - lapse zero - shift initial - boundary value problem for the linearized adm equations .however , in the case of harmonic evolution , it is clear that explicit specification of the lapse and shift leads to a more complicated initial - boundary value problem .it is more natural to retain the freedom of specifying 6 pieces of boundary data , which then determine the lapse and shift implicitly during the course of the evolution .the reduction of the free boundary data can be accomplished by other gauge conditions on the boundary , which are not directly based upon the lapse and shift .an example , which plays a central role in the friedrich - nagy formulation , is the specification of the mean curvature of the boundary . to linearized accuracy ,the unit outward normal to the boundary at is the associated mean extrinsic curvature is given to linear order by although the extrinsic curvature of a planar boundary vanishes in the background minkowski space , the linear perturbation of the background induces a non - vanishing linearized extrinsic curvature tensor . under a gauge transformation induced by the , the mean curvature transforms according to a gauge deformation of the boundary in the embedding space - timemakes it possible to obtain any mean curvature by solving a wave equation intrinsic to the boundary . in this respect ,the mean extrinsic curvature of the boundary is pure `` boundary gauge '' and can be specified to eliminate one degree of gauge freedom . when the harmonic conditions are satisfied , the mean curvature of the boundary reduces to .this discussion shows that there are various ways that the six free pieces of boundary data can be restricted by gauge conditions .such restrictions can be important for an analytic understanding of the initial - boundary problem but their usefulness for numerical simulation is a separate issue , especially in applications where the boundary does not align with the numerical grid as discussed in sec .[ sec : implem ] .the underlying equations of the reduced harmonic ricci system are the six wave equations ( [ eq : hwave ] ) and the four harmonic conditions ( [ eq : gauge_t ] ) and ( [ eq : gauge_i ] ) . the symmetric hyperbolic formulation of this system and the analysis of its characteristics is given in appendix [ app : ricci ] .the variables consist of where , , and . in a well posed initial - boundary value problem, there are seven free functions that may be specified at the boundary .for example , in the analogue of the dirichlet case , the free boundary data consists of and . in the context of the second order differential form given by eq s .( [ eq : hwave ] ) , with the harmonic conditions ( [ eq : gauge_t ] ) and ( [ eq : gauge_i ] ) , the boundary data remains the same , e.g. 
dirichlet data and .this determines the evolution of by the wave equation , which then provides source terms for the symmetric hyperbolic subsystem ( [ eq : gauge_t ] ) and ( [ eq : gauge_i ] ) .this subsystem implies that , so that the evolution of is also governed by the wave equation , with initial cauchy data and where is provided by the initial cauchy data via eq .( [ eq : gauge_i ] ) ) .the evolution of is then obtained by integration of eq .( [ eq : gauge_i ] ) . unlike the reduced harmonic einstein system where the constraints propagate up the boundary by ordinary differential equations , the initial - boundary value problem for the reduced harmonic ricci system does not necessarily satisfy einstein s equations even if the constraints are initially satisfied .the bianchi identities ( [ eq : momprop ] ) imply ( [ eq : hamprop ] ) so that both and would initially vanish but , since satisfies the wave equation , would vanish throughout the evolution domain only if it vanished on the boundary .in that case , eq . ( [ eq : momprop ] ) would imply that the momentum constraints were also satisfied throughout the evolution domain .thus evolution of constrained initial data for the harmonic ricci system yields a solution of the einstein equations if and only if the hamiltonian constraint is satisfied on the boundary .this is equivalent to requiring that on the boundary .if the evolution equations are satisfied , we can express this in the form where and .this allows formulation of the following well posed initial - boundary value problem for a solution satisfying the constraints .we prescribe initial cauchy data that satisfies the constraints for , , and , and free boundary data for and .the system of wave equations then determines .this allows integration of eq .( [ eq : boxphi ] ) on the boundary to obtain dirichlet boundary values for determination of as a solution of the wave equation .the remaining fields and are then determined as a symmetric hyperbolic subsystem .note that the boundary constraint ( [ eq : boxphi ] ) reduces the free boundary data from seven independent functions ( for unconstrained solutions ) to six , in agreement with the free boundary data for solutions of the reduced harmonic einstein system .numerical error is an essential new factor in the computational implementation of the preceding analytic results .the initial cauchy data can not be expected to obey the constraints exactly . in particular, machine roundoff error always produces an essentially random component to the data which , in a linear system , evolves independently of the intended physical simulation .it is of practical importance that a numerical evolution handle such random data without producing exponential growth ( and without an inordinate amount of numerical damping ) .we designate as _ robustly stable _ an evolution code for the linearized initial - boundary problem which does not exhibit exponential growth for random ( constraint violating ) initial data and random boundary data .this is the criterion previously used to establish robust stability for adm evolution with specified lapse and shift .we test for robust stability using the 3-stage methodology proposed for evolution - boundary codes in ref .the tests check that the norm of the hamiltonian constraint does not exhibit exponential growth under the following conditions . *_ stage i : _ evolution on a 3-torus with random initial cauchy data . *_ stage ii : _ evolution on a 2-torus with plane boundaries , i.e. 
] , with time levels } = n \ , \delta t ] . in order to obtain compact finite difference stencils for imposing boundary conditions ,the fields ( or ) are represented on staggered grids at staggered time - levels , where }_{[i+1/2 , j+1/2 , k+1/2 ] } = f(t^{[n+1/2 ] } , x_{[i+1/2 ] } , y_{[j+1/2 ] } , z_{[k+1/2]}) ] requires } ] , which , in turn , allows update of } ] or } ] , etc .the boundary data consist of on the faces , edges and corners of the cube .the field can then be updated at all staggered grid points inside the cube , including those neighboring the boundary .for instance , update of } ] , all of which are on or inside the boundary .similarly , evolution of can be carried out at all interior grid points without further boundary data .robust stability of the evolution - boundary algorithm is demonstrated by the graph of the hamiltonian constraint in fig .[ fig : stage.iii ] .the linear growth results from momentum constraint violation in the initial data .* stage i * evolution of is carried out identically as the evolution of in the einstein system ( see eq .( [ eq : fde.gamma^ij ] ) ) .the fields and are represented on the integer grid while is represented on a half - integer grid staggered in space and in time .thus the evolution equations ( [ eq : gauge_t ] ) and ( [ eq : gauge_i ] ) for and have finite difference form }_{[i , j , k ] } - \phi^{[n]}_{[i , j , k]}}{\delta t } + \left ( \partial_i h^{it } \right)^{[n+1/2]}_{[i , j , k ] } + \frac{1}{2 } \delta_{ij } \frac{h^{ij[n+1]}_{[i , j , k ] } - h^{ij[n]}_{[i , j , k]}}{\delta t } & = & 0 \\ \label{eq : fde.h^it }\frac { h^{it[n+1/2]}_{[i+1/2,j+1/2,k+1/2 ] } - h^{it[n-1/2]}_{[i+1/2,j+1/2,k+1/2 ] } } { \delta t } + \left ( \partial_i h^{it } - \frac{1}{2 } \delta_{jk } \partial^i h^{jk } + \partial_j h^{ij } \right)^{[n]}_{[i+1/2,j+1/2,k+1/2 ] } & = & 0 , \end{aligned}\ ] ] where the spatial derivative terms are computed according to eq .( [ eq : fde.f_x ] ) .* stage ii * _ unconstrained plane boundary_. as shown in sec .[ sec : hrs ] , seven free functions can be prescribed as unconstrained boundary data for and in the reduced harmonic ricci system .again let the boundary be defined by the -th grid point , with for interior points .then the boundary data } ] allows the evolution algorithm to be applied to update and at all interior points , e.g. to update } ] , which in turn allows update of at all interior points ._ constrained plane boundary_. conservation of the constraints in the reduced harmonic ricci system requires that the hamiltonian constraint be enforced at the boundary . in order to obtain a finite difference approximation to a solution of einstein s equations ,the unconstrained evolution - boundary algorithm must be modified to enforce the hamiltonian constraint on the boundary . on a boundary, we accomplish this , in accord with the discussion in sec .[ sec : hrs ] , by prescribing freely the function and the five components of the traceless symmetric tensor . the missing ingredient , is updated at the boundary according to eq .( [ eq : boxphi ] ) . in the finite difference algorithm , in order to be able to apply eq .( [ eq : boxphi ] ) via a centered three - point stencil , we introduce a guard point at } ] , and that the boundary data } ] , we use the following evolution - boundary algorithm to compute these fields at } ] at all grid points within the numerical domain of dependence of the data known at } ] by prescribing boundary values for and setting . 
* \(iii ) at the guard point we update the fields }_{[k_0 + 1 ] } = h^{ij[n]}_{[k_0 + 1 ] } + \left(\delta t\right ) \ , \left(\partial_t h^{ij}\right)^{[n+1/2]}_{[k_0 + 1]} ] using the field - equation , written in the finite - difference form }_{[k_0 ] } - 2 \ , h^{ij[n]}_{[k_0 ] } + h^{ij[n-1]}_{[k_0 ] } } { \delta t^2 } = \left(\nabla^2 h^{ij}\right)^{[n]}_{[k_0 ] } .\ ] ] * \(v ) at the boundary point we compute the boundary values }_{[k_0]} ] and }_{[k_0]} ] .* \(vii ) we assign boundary data for }_{[k_0]} ] and } ] requires the values of at the points } ] , where . at this set of pointswe obtain values for via eq .( [ eq : fde.gamma^ij ] ) , with provided in a `` guard - shell '' , where .the radius of the spherical boundary of the reduced harmonic einstein system is related to the linear size of the computational grid by this ensures that all guard points fall inside the domain ^ 3 ] at all non - staggered grid points within the sphere of radius .* \(ii ) we provide boundary data } ] within the same spherical shell .* \(iii ) we provide boundary data } ] inside the sphere of radius according to eq .( [ eq : gauge_t ] ) , and update the fields } ] , where the boundary data is provided .the quantity is defined by two conditions : ( i ) the fields can be updated according to eq .( [ eq : fde.box_hij ] ) at all non - staggered grid points }^2+y_{[j]}^2+z_{[k]}^2 } < r + \delta r_0 + \delta r_1 ] at all non - staggered grid points within the sphere or radius . *\(ii ) we update the fields } ] via eq .( [ eq : fde.box_hij ] ) .then we update the field } ] and } ] at the same set of grid points .* \(iii ) we provide boundary data } ] , then we update } = h^{ij[n ] } + \delta t \ , \left(\partial_t h^{ij } \right)^{[n+1/2]} ] at the set of boundary points }^2+y_{[j]}^2+z_{[k]}^2 } < r + \delta r_0 ] and } ] .the graphs of the hamiltonian constraint in fig .[ fig : stage.iv ] illustrate robust stability for a spherical boundary for the reduced harmonic einstein system and for the reduced harmonic ricci system with constrained boundary data .comparison of figs .[ fig : stage.iii ] - [ fig : stage.iv ] shows that there is no significant difference between the stage iii and stage iv performances in terms of numerical stability . in order to calibrate the performance of the algorithms we carried out convergence tests based upon analytic solutions constructed from a superpotential symmetric in and and antisymmetric in ] , such that . as a result of these symmetry properties ,the tensor is symmetric and satisfies the linearized harmonic einstein equations and . in our first testbedwe choose as a superposition of two solutions , with the remaining independent components of set to zero .the solution is defined by }{\omega_a^2 } , \quad i \neq j\end{aligned}\ ] ] and is defined by } { \omega_b^2 } , \quad b^{ij } = b^{tt } = 0 .\label{eq : ctest.plwave - last}\end{aligned}\ ] ] here is a plane wave propagating with frequency along the diagonal of the plane , so that a wave crest leaving travels a distance before arriving at .since the topology of stages i and ii imply periodicity in the direction , we set similarly , the frequency of the functions is set to in stage iii we use the same choices , while in the stage iv tests we set the amplitudes were chosen to be convergence runs used the plane wave solution . 
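As a side illustration (not taken from this work), the order of convergence quoted in such tests can be estimated from error or constraint norms computed on a sequence of grids whose spacing is halved each time; the sample numbers below are placeholders used only to show the computation.

```python
# Double-mesh estimate of the convergence order: given norms e(h), e(h/2), ...
# of the error or of the Hamiltonian constraint, the rate between successive
# resolutions is rc = log2( e(h) / e(h/2) ).  Sample values are hypothetical.
import math

def convergence_rates(errors):
    """errors[i] is a norm on grid spacing h / 2**i; returns successive rates."""
    return [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]

sample_linf = [2.7e-2, 1.4e-2, 7.1e-3, 3.6e-3]   # hypothetical L_inf norms
print(convergence_rates(sample_linf))             # roughly [0.95, 0.98, 0.98] -> first order
```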
in stagesi - iii we used the grid sizes while in stage iv we used with the additional gridsize in the case of the the reduced harmonic ricci system with constrained spherical boundary .the time - step was set to . in the stageiv test of the reduced harmonic einstein system the widths of the boundary shells were chosen to be and .the same parameters were used when testing the algorithm for the reduced harmonic ricci system with unconstrained spherical boundary . the evolution - boundary algorithm for the reduced harmonic ricci system with constrained spherical boundarywas tested using the parameters , and .the code was used to evolve the solutions from to ( in stage iv ) , at which time convergence was tested by measuring the and the norms of for the einstein system and of for the ricci system , which test convergence of the hamiltonian constraint .the norms were evaluated in the entire evolution domain .in addition , we also checked convergence of the metric components to their analytic values .in addition to plane wave tests , we tested the qualitative performance in stage iv using an offset spherical wave based upon the superpotential ( with shifted origin ) where and .\ ] ] the parameters , and are set to evolution requires cauchy data at and boundary data at the guard points .the cauchy data was provided by giving } , \gamma^{ij[-1 ] } , \gamma^{tt[0]} ] at all interior and guard points .in addition , we provided boundary data at each time - step by giving } ] and } ] .we first tested the code without numerically imposing the hamiltonian constraint . in this case we provided boundary data at each time- step by giving } ] at all guard points .next we tested the code with the hamiltonian constraint numerically imposed at the boundary .thus we first prescribed the traceless } ] at all guard points , then computed at each time - step via the boundary constraint eq .( [ eq : boxphi ] ) . in all cases we found the numerically evolved metric functionsconverge to their analytic values to . in stagei , vanished to roundoff accuracy , while in stages ii - iii it vanished to second order accuracy .in particular , for the stage iii algorithm with constrained boundary , converged to zero as . in stageiv , with constrained boundary , we found that the norm of vanishes to first order accuracy .however , the norm decreases linearly with grid size only for and but fails to show further decrease for and . this anomalous behavior of the norm stems from the random way in which guard points are required at different sites near the boundary .this introduces an unavoidable nonsmoothness to the second order error in the metric components , which in turn leads to error in the second spatial derivatives occurring in or in the hamiltonian constraint . 
unlike the einstein system in which the constraints propagate tangent to the boundary , this error in the ricci system propagates along the light cone into the interiorhowever , since its origin is a thin boundary shell whose width is , the norm of remains convergent to first order .we expect that the convergence of the hamiltonian constraint for a spherical boundary would be improved by matching the interior solution on the cartesian grid to an exterior solution on a spherical grid aligned with the boundary , as is standard practice in treating irregular shaped boundaries .we also tested the code s ability to evolve an outgoing spherical wave traveling off center with respect to a spherical boundary of radius .[ fig:2dplots ] illustrates a simulation performed using the stage iv algorithm for the reduced harmonic ricci system , with the hamiltonian constraint numerically enforced at the boundary .the metric fields were evolved from to , using a grid of . after the analytic wave has propagated out of the computational domain ,the remnant error is two orders of magnitude smaller than the initial signal .this shows that artificial reflection off the boundary is well controlled even in the computationally challenging case of a piecewise cubic spherical boundary .we thank helmut friedrich for numerous discussions of the initial - boundary value problem .our numerical code was based upon an earlier collaboration with roberto gmez .this work has been partially supported by nsf grant phy 9988663 to the university of pittsburgh .computer time was provided by the pittsburgh supercomputing center and by npaci .in order to study the evolution of the system consisting of eq s .( [ eq : hwave ] ) , ( [ eq : gauge_t ] ) , and ( [ eq : gauge_i ] ) in the half - space with boundary at , we employ the auxiliary variables . in terms of the variables the system takes the form next we define the 34-dimensional vector by the system of equations ( [ eq : ev_system_first ] ) - ( [ eq : ev_system_last ] ) then has the form where is the identity matrix , and where the matrix has the eigenvalue , with multiplicity and eigenvectors the eigenvalue , with multiplicity and eigenvectors and the kernel of the matrix has dimension , with a basis in the eigen - basis defined by the vector defined in eq .( [ eq : udef1 ] ) takes the form with non - homogeneous boundary data can be given in terms of a free column vector field in the form where can be any matrix satisfying the three simplest matrices that satisfy the condition ( [ eq : hcond ] ) are and .the first of these corresponds to specifying neumann data for and dirichlet data for .using the zero matrix as a candidate for corresponds to giving sommerfeld data for and specifying the quantity .last , picking to be the identity matrix corresponds to giving dirichlet data for as well as for .note that the evolution system ( [ eq : ev_system_first ] ) - ( [ eq : ev_system_last ] ) accepts a much richer class of boundary conditions than the three we just mentioned .one simply needs to pick a matrix that satisfies eq .( [ eq : hcond ] ) and the choice of defines the seven free functions that are to be specified at the boundary .the curvature tensor , which provides gauge invariant fields , decomposes into the ricci curvature , which vanishes if the evolution and constraint equations are satisfied , and the weyl curvature . 
in order to analyze the boundary freedom ,it is convenient make the following choice of a complete , independent set of 10 linearized weyl tensor components : , , , , , and .we use the linearized vacuum bianchi identities \mu\nu}=0 $ ] and to show that the weyl data which can be freely specified on the boundary can be reduced to the 2 independent components .first , the identity implies ( after using the trace - free property of the weyl tensor ) which determines the boundary behavior of in terms of the remaining 9 weyl components .next , note that the identity implies or , taking a -derivative and using eq .( [ eq : trac ] ) , that this gives a propagation equation intrinsic to the boundary which determines the time dependence of in terms of the boundary data for .( note that propagates up the boundary with velocity in one mode and in a cone with velocity in the other mode . )next , the identity determines the time dependence of ; and the identity determines the time dependence of .this reduces the free weyl data on the boundary to the 4 independent components and .however , the specification of , in addition to , would lead to an inconsistent boundary value problem .this can be seen from the identity which determines neumann data for in terms of dirichlet data for and other known quantities .similarly , the identity determines neumann data for .thus , since the components of the weyl tensor satisfy the wave equation , the specification of both and as free dirichlet boundary data leads to an inconsistent boundary value problem .the determination of boundary boundary values for from boundary data for is a global problem which first requires solving the wave equation to determine from its boundary and initial data .then the time derivative of the trace - free part of eq .( [ eq : bianch ] ) yields which propagates up the boundary in terms of initial data . defining and , with ,this reduces to which has propagation velocity .( note that this is but one of the variations consistent with the maximally dissipative condition used by friedrich and nagy .in the case of unit lapse and vanishing shift , assigning boundary data for is equivalent to assigning data for the trace - free part of the intrinsic 2-metric of the boundary foliation , consistent with results found in ref .
We investigate the initial-boundary value problem for linearized gravitational theory in harmonic coordinates. Rigorous techniques for hyperbolic systems are applied to establish well-posedness for various reductions of the system into a set of six wave equations. The results are used to formulate computational algorithms for Cauchy evolution in a 3-dimensional bounded domain. Numerical codes based upon these algorithms are shown to satisfy tests of robust stability for random constraint-violating initial data and random boundary data, and to give excellent performance for the evolution of typical physical data. The results are obtained for plane boundaries as well as piecewise cubic spherical boundaries cut out of a Cartesian grid.
since the black - scholes models rely on stochastic differential equations , option pricing rapidly became an attractive topic for specialists in the theory of probability and stochastic methods were developed first for practical applications , along with analytical closed formulas .but soon , with the rapidly growing complexity of the financial products , other numerical solutions became attractive [ 1,2,6,12,15 - 19 ] .there is a large and ever - going number of different interest rate derivative products now , for instance bonds , bonds options , interest rate caps , swap options , etc .bonds in general carry coupons , but there also exists a special kind of bond without coupons which is called zero coupon bond ( zcb ) . a zcb is purchased today a certain price , while at maturity the bond is redeemed for a fixed price . by a similar way to the derivation of the black - sholes equation , the problem of zcb pricing can be reduced to a partial differential equation ( see [ 5,13 ] ) . the present paper deals with a degenerate parabolic equation of zero - coupon bond pricing [ 5,13 ] . since our equation ( see ( 1 ) , ( 2 ) , ( 3 ) ) in the next section becomes _ degenerate _ at the boundary of the domain , classical finite difference methods may fail to give accurate approximations near the boundary .an effective method that resolves the singularity is proposed by s. wang for the black - sholes equation .the method is based on a finite volume formulation of the problem coupled with a fitted local approximation to the solution and an implicit time - stepping technique .the local approximation is determined by a set of two - point boundary value problems defined on the element edges .this fitting technique is based on the idea proposed by allen and southwell [ 8,10 ] for convection - diffusion equations and has been extended to one and multidimensional problems by several authors [ 7,8,10 ] .this paper is organized as follows .our model problem is presented in section 2 , where we discuss our basic assumptions and some properties of the solution .the discretization method is developed in section 3 .section 4 is devoted to the time discretization .we show that the system matrix is a -matrix , so that the discretization is monotone . in this casethe maximum principle is satisfied and thus the discrete solution is non - negative .numerical experiments show higher accuracy of our scheme in comparison with other known scheme near the degeneracy .we observe and emphasize the fact that in the proposed method , we do not need to refine the mesh near the boundary ( degeneration ) .suppose that the short term _ interest rate , the spot rate , _ follows a random walk where is the brownian motion . since the _ spot rate _ , in practice , is never greater than a certain number , which is assumed , and never less than or equal to zero , we suppose that ] if we define and .let according to the assumptions ( [ 9 ] ) , , at the construction of the finite volume approximation several cases must be considered .* _ we consider equation _ ( [ 11 ] ) _ with coefficients _ ( [ 9 ] ) , . 
now( [ 11 ] ) takes the form \nonumber \\ + ( r+\theta'+\lambda(t)w'-(ww')')p=0.\end{aligned}\ ] ] integrating ( [ 12 ] ) over the interval we have {r_{i-1/2}}^{r_{i+1/2}}+q_{i}=0,\ ] ] for , where we denoted applying the mid - point qudrature rule to the first and the last terms in ( [ 13 ] ) we obtain +q_{i}^{h}p_{i}=0,\ ] ] for , _ denotes the nodal approximation to _ to be determined and is the flux associated with and denoted by the discussion is divided into _ three sub - cases_. * case 1.1 . *_ approximation of at for _ let us consider the following two - point boundary value problem for : [ 17 ] where .integrating yields the first order linear equation where denotes an additive constant ( depending on ) .the analytic solution of this linear equation is where is an additive constant .note that in this reasoning we assume that .but as will be seen below , the restriction can be lifted as it is limiting case of the above when . applying the boundary conditionwe obtain where .solving this linear system gives for .this gives a representation for the flux on the right - hand side of ( [ 18 ] ) .note that ( [ 21 ] ) also holds when .this is because since and thus , in ( [ 21 ] ) provides an approximation to the flux at .* case 1.2 . * _ approximation of at ._ now , we write the flux in the form note that the analysis in case 1.1 does not apply to approximation of the flux because is degenerate .this can be seen from expression ( [ 19 ] ) .when , we have to chose as , otherwise , blows up as .however , the resulting solution can never satisfy both of conditions in . to solve this difficulty , following [ 15 ], we will reconsider , with an extra degree of freedom in the following form : where is an unknown constant to be determined . integrating the differential equation once we have using the condition we have and so the above equation becomes solving this problem analytically gives where as defined case 1.1 and is an additive constant ( depending on ) . to determine the constant and , we first consider the case . when implies that . if , is arbitrary , so we also choose . using we obtain . when , from ( [ 24 ] ) we see that is satisfied for any and . therefore , solutions with such and are not unique .we choose , and and then .therefore , from ( [ 23 ] ) we have that \ ] ] for both and .furthermore , ( [ 24 ] ) reduces to .\ ] ] * case 1.3 . * _ approximation of at . we write the flux in the form the situation is symmetric to this of case 1.2 .we consider the auxiliary problem : where is an unknown constant to be determined .integrating the differential equation once we have using the condition we have , and so the last equation becomes solving this problem analytically gives where as defined before and is an additive constant ( dependent on ) . to determine the constants and , we first consider the case when . when implies .if is arbitrary , so we also choose . using in ( [ 28 ] )we obtain . when , from ( [ 28 ] ) we see that is satisfied for any and .we choose , and in ( [ 28 ] ) gives .therefore , from ( [ 27 ] ) we have .\ ] ] * case 2 . * _ now we consider equation _ ( [ 11 ] ) _ with coefficients _ ( [ 9 ] ) , .following the line in case 1 , we have {r-1/2}^{r_{i+1/2}}+q_{i}=0\ ] ] for , where * case 2.1 . * _ approximation of at for ._ applying the mid - point quadrature rule to the first and third terms in ( [ 30 ] ) we find + q_{i}^{h}=0\ ] ] for , where further , one can obtain a formula in the form ( [ 21 ] ) .* case 2.2 . 
* _ approximation of at _ now we proceed as in case 1.2 , but * case 2.3 * _ approximation of at . _ in this case * case 3 . *_ here we consider equation _ ( [ 11 ] ) _ with coefficients _ ( [ 9 ] ) , . in this casethe construction is symmetric to this in case 2 and we will only present the results . {r-1/2}^{r_{i+1/2}}+q_{i}= 0\ ] ] for , where * case 3.1 . * _ approximation of at for ._ now we take * case 3.2 . * _ approximation of at ._ in this subcase * case 3.3 . * _ approximation of at ._ now we proceed as in case 1.3 but * case 4 .* _ here we consider equation _( [ 11 ] ) _ with coefficients _ ( [ 9 ] ) , .we have _ { r-1/2}^{r_{i+1/2}}+q_{i}= 0\ ] ] for , where * case 4.1 . * _ approximation of at ._ now we choose * case 4.2 . *_ approximation of at ._ we take * case 4.3 . *_ approximation of at ._ we choose finally , using ( [ 21 ] ) , ( [ 25 ] ) , ( [ 27 ] ) and ( [ 29 ] ) , depending on the value of respectively , we define a global piecewise constant approximation to by satisfying for . substituting ( [ 21 ] ) or ( [ 25 ] ) or ( [ 27 ] ) or ( [ 29 ] ) , depending on the value of respectively , into ( [ 15 ] ) we obtain where for ; now we will derive the semi - discrete equations at and . we integrate the equation ( [ 12 ] ) over the interval to get using ( [ 25 ] ) we obtain + q_{0}^{h}p_{0}=0,\ ] ] where therefore , at we have : next , in a similar way ( now integrating ( [ 12 ] ) over and using ( [ 29 ] ) ) , we derive the semi - discrete equation at : +q_{n}^{h}p_{n}=0,\end{aligned}\ ] ] where therefore , at we have where now discuss the accuracy of the interest rate discretization of the system ( [ 32 ] ) , ( [ 33 ] ) , ( [ 34 ] ) .let be row vectors with dimension defined by obviously , introducing the vector and using , the equations ( [ 32 ] ) , ( [ 33 ] ) , ( [ 34 ] ) can be written as for .this is a first - order linear odes system . to estimate the accuracy of the interest rate discretization, we will follow .first , we define a space of functions associated with in the following way . on the interval we choose so that it satisfies with and .naturally , the solution to this two - point boundary value problem is given in ( [ 19 ] ) where and are determined by ( [ 20 ] ) with and .similarly we define on the interval so that and . combining these two solutions and extending the function as zero to the rest of the interval we have for in a similar way , on the intervals and we define the linear functions the following assertion is an analogue of lemma 4.2 in .let be a sufficiently smooth function and be the -interpolant of .then where and are the fluxes defined in ( [ 16 ] ) and ( [ 31 ] ) , respectively and is a positive constant independent of and . summarizing the constructions in all cases 1 - 4 and using lemma 1 , the following result has been established .the semidiscretization ( [ 35 ] ) is consistent with equation ( [ 7 ] ) and the truncation error is of order .to discretize the system ( [ 35 ] ) we introduce the time mesh : for each we put and .then , we apply the two - level time - stepping method with splitting parameter $ ] to ( [ 35 ] ) and yield for .this linear system can be rewritten as {\bf{p}}^{j}\ ] ] for , where is diagonal matrix . when , the time stepping scheme becomes crank - nicholson scheme and when it is the backward euler scheme . both of these schemes are unconditionally stable , and they are of second and first order accuracy . we now show that , when is sufficiently small , the system matrix of ( [ 36 ] ) is an -matrix . 
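Before giving the proof, a schematic numerical illustration (not the authors' code) of this two-level theta-scheme and of the M-matrix check may be helpful. The tridiagonal coefficients below come from a plain non-degenerate model operator and merely stand in for the fitted finite-volume fluxes derived above; together with the step sizes they are illustrative assumptions rather than values from the text.

```python
# Two-level theta-scheme for a semidiscrete finite-volume system
#   M dP/dt + A P = 0,
#   (M + theta*dt*A) P^{j+1} = (M - (1-theta)*dt*A) P^j,
# with a sufficient check of the M-matrix (monotonicity) property.
# Coefficients of A are placeholders for the fitted fluxes of the scheme.
import numpy as np

n, dt, theta = 50, 2e-4, 0.5          # theta = 0.5 -> Crank-Nicolson, 1.0 -> backward Euler
h = 1.0 / n
main = 2.0 / h**2 + 1.0               # diffusion + reaction, illustrative values
off = -1.0 / h**2
A = (np.diag(np.full(n, main))
     + np.diag(np.full(n - 1, off), 1)
     + np.diag(np.full(n - 1, off), -1))
M = np.eye(n)                          # lumped "mass" matrix of the finite-volume cells

lhs = M + theta * dt * A
rhs = M - (1.0 - theta) * dt * A

def is_m_matrix(B, tol=1e-12):
    """Sufficient check: positive diagonal, non-positive off-diagonal entries,
    and weak diagonal dominance."""
    d = np.diag(B)
    off_part = B - np.diag(d)
    return bool(np.all(d > 0)
                and np.all(off_part <= tol)
                and np.all(d >= np.sum(np.abs(off_part), axis=1) - tol))

print("LHS is an M-matrix:", is_m_matrix(lhs))
print("RHS is non-negative:", bool(np.all(rhs >= -1e-12)))

P = np.exp(-np.linspace(0, 1, n))      # non-negative initial prices (illustrative)
for _ in range(100):
    P = np.linalg.solve(lhs, rhs @ P)
print("min price after 100 steps:", P.min())
```

With the step size chosen small enough, the left-hand matrix is an M-matrix and the right-hand matrix is non-negative, so the discrete prices remain non-negative; this is the monotonicity property established below.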
for any given ,if is sufficiently small , the system matrix of ( [ 36 ] ) is an -matrix .we will proceed as follows . using the definition of , will write down the scalar form of ( [ 36 ] ) : where let us first investigate the off - diagonal entries of the system matrix and . from the formulas for from the above we have , that is because for each and each .we have used that has just the sign of . from ( 22 ) we have that it is true also for .now it is clear that and are negative .we should also note that is always positive since is small .the situation is different for , , , , and , , , , . from the first three equations we find it is easily to see that when and then for small .therefore and . in a similar way one can eliminate and . as a resultwe obtain a system of linear algebraic equations with unknowns which matrix is a -matrix . while are non - negative , we have to prove if and are also non - negative . from the formulae for it follows that when is small is non - negative since and are of the same order with respect to . is being handled the same way as and also considered non - negative .since the load vector is non - negative and the corresponding matrix is an m - matrix we can conclude that are non - negative .finally , using the formulas for one can easily check that they are non - negative too if is small .theorem 3 shows that the fully discretized system ( [ 36 ] ) satisfies the discrete maximum principle and because of that fact the above discretization is monotone .this guarantees the following : for non - negative initial function the numerical solution , obtained via this method , is also non - negative as expected , because the price of the bond is a positive number , see lemma 1 .numerical experiments presented in this section illustrate the properties of the constructed schemes . in order to investigate numerically the convergence and the accuracy of the constructed schemes for , and we approximately solve the model problem with the known analytical solution ( exponentially decreasing with respect to the arguments ) .we choose this function because its feature is similar to that of the exact solution to the problem under consideration .we take and . the initial distribution we compute using this analytical solution .let us note , that when we use analytical solution , in the equation a right hand side arises . in the tables beloware presented the calculated , and mesh norms of the error by the formulas }}}.\ ] ] everywhere the calculations are performed with constant time step . for the first and the second examples the rate of convergence ( rc )is calculated using double mesh principle where is the mesh -norm , -norm or -norm , and are respectively the exact solution and the numerical solution computed at the mesh with subintervals ._ first example ._ for the first example coefficients in equation ( [ 7 ] ) are that correspond to case 1 . in table 1 below are presented the calculated , and mesh norms of the error .n&-norm&rc&-norm&rc&-norm&rc + 21&1.481 e-2&-&2.552 e-3&-&2.725 e-2&- + 41&7.607 e-3&0.96&9.415 e-4&1.44&1.978 e-2&0.46 + 81&3.855 e-3&0.98&3.402 e-4&1.47&1.418 e-2&0.48 + 161&1.941 e-3&0.99&1.216 e-4&1.48&1.010 e-2&0.49 + 321&9.738 e-4&1.00&4.324 e-5&1.49&7.169 e-3&0.49 + [ tab1 ] _ second example ._ for the second example coefficients in equation ( [ 7 ] ) are that correspond also to case 1 . 
in table 2 below are calculated the mesh , and norms of the error .n&-norm&rc&-norm&rc&-norm&rc + 21&1.003 e-2&-&1.482 e-3&-&1.541 e-2&- + 41&5.156 e-3&0.96&5.443 e-4&1.44&1.111 e-2&0.46 + 81&2.614 e-3&0.98&1.962 e-4&1.47&7.937 e-3&0.48 + 161&1.316 e-3&0.99&7.005 e-5&1.48&5.641 e-3&0.49 + 321&6.604 e-4&0.99&2.489 e-5&1.49&3.998 e-3&0.49 + [ tab2 ] it can be seen from table 1 and table 2 that the numerical results are similar ._ third example ._ for this example the coefficients in equation ( [ 7 ] ) are the following : that correspond to case 4 .let us note that this case is the most complicated of the four cases discussed in the article with respect to the deriving of the numerical scheme . in figure 1 we present the analytical and corresponding approximate solutions .one can see that the biggest error is _ near the ends of the interval , i. e. near to the points of the degeneration_. , numerical solution for , .,width=384,height=192 ] in table 3 are presented the calculated mesh , and norms of the error for this example .n&-norm&-norm&-norm + 21&2.253 e-2&3.498 e-3&4.078 e-2 + 41&8.382 e-3&1.771 e-3&3.561 e-2 + 81&4.920 e-3&8.342 e-4&2.728 e-2 + 161&2.732 e-3&3.735 e-4&1.965e-2 + [ tab1 ] for this example we used runge method for practical estimation of the _ rate of convergence _ of the considered schemes with respect to the space variable at fixed value of . in the casewhen the exact solution of the model problem is known the formula for is and in the case when the exact solution is not known the formula for is in both cases - on two inserted grids ( when use the exact solution of model problem ) and on three inserted grids ( without exact solution ) we get that the rate of convergence is about two , when the node is not very near to the points of degeneration . for the problem under consideration we constructed several difference schemes , well known for non - degenerate parabolic problems [ 11 ]then , the differential equation ( [ 7 ] ) was approximated , together with the boundary conditions ( [ 5 ] ) , ( [ 6 ] ) and initial condition ( [ 8 ] ) .with respect to the variable for approximation of the second derivative is used the usual three - point approximation , and for the first derivative - central difference .with respect to time a crank - nicolson scheme is constructed .further this scheme we will call scheme . the scheme we have constructed in this paper for the case 4 we will call scheme . from the table 4 one can see that the scheme a gives more accurate results near the ends of the interval , where the degeneration occurs ..comparison between scheme a and scheme b [ cols="^,^,^,^",options="header " , ] [ tab3 ]we have studied a degenerate parabolic equation in the zero - coupon bond pricing .we constructed and discussed a finite volume difference scheme for the problem .we have shown that the numerical scheme results a monotone numerical scheme .the numerical experiments demonstrate the efficiency of our scheme near degeneration .t. chernogorova , r. valkov , a computational scheme for a problem in the zero - coupon bond pricing , amer .inst . of phys .1301 , pp .370 - 378 , 2nd international conference application of mathematics in technical and natural sciences , ed .m. d. todorov and c. i. christov , sozopol , bulgaria , june 21 - 26 , 2010 .
In this paper we numerically solve a _degenerate_ parabolic equation with _dynamical_ boundary conditions arising in zero-coupon bond pricing. First, we discuss some properties of the differential equation. Then, starting from the divergence form of the equation, we implement the finite-volume method of S. Wang to discretize the differential problem. We show that the system matrix of the discretization scheme is an M-matrix, so that the discretization is _monotone_. This guarantees the non-negativity of the price with respect to time if the initial distribution is non-negative. Numerical experiments demonstrate the efficiency of our difference scheme near the ends of the interval where the degeneration occurs. Keywords: degenerate parabolic equation, zero-coupon pricing, finite volume, difference scheme, M-matrix.
in recent years studying the structure , function and evolution of complex networks in society and nature has become a major research focus . examples of complex networks include the internet , the world wide web , the international aviation network , social collaborations between a group of people , protein interactions in a cell , to name just a few .these networks exhibit a number of interesting properties , such as short average distance between a pair of nodes in comparison with large network size , the clustering structure where one s friends are friends of each other , and the power law distribution of the number of connections a node has .this paper concerns one particular type of complex networks , the document networks , such as the web and the citation networks .document networks are characteristic in that a document node , e.g. a webpage or an article , carries text or multimedia content . properties of document networks are not only affected by topological connectivity between nodes , but also strongly influenced by semantic relation between the content of nodes. research on document networks is relevant to a number of issues , such as the web navigation and information retrieval .menczer reported that the probability of linkage between two documents increases with the similarity between their content .based on this observation , he proposed the degree - similarity mixture ( dsm ) model , which successfully reproduces two important properties of document networks : the power - law connectivity distribution and the increasing linkage probability as a function of content similarity .the dsm model remains one of the most advanced models for document networks .recently we reported that document networks exhibit a number of triangular clustering properties , for example they have huge numbers of triangles and high clustering coefficients , and there is a positive relation between the probability of formation of a triangle and the content similarity among the three documents involved .menczer s dsm model focuses on the connectivity and content properties between two nodes , and it produces only around 5% of triangles in real document networks .there are a number of topology models which can produce networks with a power - law distribution of connectivity with high clustering coefficient , such as a network model in which is based on the balance between different types of attachment mechanisms , i.e. cyclic closure and focal closure .this model , however , do not has the ingredient of document content in its generative mechanisms and can not reproduce content - related properties of document networks . in this paper , we examine and model the triangular clustering properties of document networks . in section [ sec:2 ] ,we firstly introduce two datasets of real document networks , we then define a number of metrics to quantify connectivity and content properties , and finally we review menczer s dsm model . in section [ sec:3 ]we propose our degree - similarity product ( dsp ) model , where a node s ability of acquiring a new link is given as a _ product _function of node connectivity and content similarity between nodes . in section [ sec:4 ]we evaluate our dsp model against the real data and show that the model reproduces not only the connectivity and content properties between two nodes , but also the triangular clustering properties involving three nodes . 
in section [ sec:5 ]we conclude the paper .|ccc properties & wt10 g & dsm model & dsp model + & 50 000 & 50 000 & 50 000 + & 233 692 & 233 692 & 234 020 + & 1266 730 & 62 503 & 1233 308 + & 0.153 & 0.062 & 0.121 + properties & pnas & dsm model & dsp model + & 28 828 & 28 828 & 28 828 + & 40 610 & 40 610 & 40 580 + & 13 544 & 868 & 13 583 + & 0.214 & 0.021 & 0.139 + in this study we examine the following two datasets of real document networks . * ` wt10 g ` data , which is a webpage network where a webpage is a node and two webpages are connected if there is a hyperlink between them .the wt10 g data are proposed by the annual international text retrieval conference ( http://trec.nist.gov ) and distributed by csiro ( http://es.csiro.au/trecweb ) .the data preserve properties of the web and have been widely used in research on information modelling and retrieval .the data contain million webpages , hyperlinks among them and the text content on each webpage .we study ten randomly sampled subsets of the wt10 g data .each subset contains webpages with the url domain name of _.com_. ( a recent study has shown that subsets sampled from different or mixed domains exhibit similar properties . ) observations in this paper are averaged over the ten subsets .* ` pnas ` data , which is a citation network where an article is a node and two article are linked if they have a citation relation .it contains articles published by the proceedings of the national academy of sciences ( pnas ) of the united states of america from to .we crawled the data at the journal s website ( http://www.pnas.org ) in may 2008 and used each article s title and abstract as its content .triangle is the basic unit for clustering structure and network redundancy .triangle - related properties have been used to quantify network transitivity and characterise the structural invariance across web sites .the most widely studied triangle - related property is the clustering coefficient , , which measures how tightly a node s neighbours are interconnected with each other .clustering coefficient is calculated as the ratio of the number of triangles formed by a node and its neighbours to the maximal number of triangles they can have .when a node and its neighbours are fully interconnected and form a clique ; and when the neighbours do not know each other at all .the average clustering coefficient over all nodes measures the level of clustering behaviour in a network .note that triangle and clustering coefficient are not trivially related .as shown in table [ tab : modelevaluation ] , the total number of triangles , , in the wt10 g data is almost 100 times of that in the pnas data .the density of triangles in the wt10 g data , measured by or , is also many times larger .however the average clustering coefficient , , of the wt10 g data is smaller than that of the pnas data .linkage probability and triangularity probability for the wt10 g webpage network and the pnas citation network .the results are compared with menczer s dsm model .( a ) linkage probability as a function of content similarity .( b ) triangularity probability , in logarithmic scale , as a function of trilateral similarity .,title="fig:",width=302 ] linkage probability and triangularity probability for the wt10 g webpage network and the pnas citation network .the results are compared with menczer s dsm model .( a ) linkage probability as a function of content similarity .( b ) triangularity probability , in logarithmic scale , as a function of trilateral similarity 
.,title="fig:",width=302 ] for a given document network , we collect keywords present in all documents in the network and construct a keyword vector space .the content of a document is then represented as a keyword vector , , which gives the frequency of each keyword s appearance in the document .the content similarity , or relevance , , between two documents , and , is quantified by the cosine of their vectors : when the content of the two documents are highly related or similar ; when the two documents have very little in common .the linkage probability , , is the probability that two nodes with content similarity are connected in the network .it is calculated as , where is the total number of node pairs ( connected or not ) whose content similarity is , and is the number of such node pairs which are actually connected in the network . figure [ fig : dsm](a ) shows that in document networks the linkage probability increases with the content similarity , i.e. the more similar the more likely two documents are connected .for example in the pnas citation network , if two articles have there is a 50% chance that they have a citation relation , by comparison the chance is very low when . in document networks , if a node is similar to a second node and this second node is similar to a third node , then the first and third nodes are also similar . herewe define a new metric called the _ trilateral similarity _ , , which measures the minimum content similarly among three nodes . for three document nodes , and , the trilateral similarity is the smallest ( bilateral ) content similarity between each pair of the three nodes , i.e. similarly we define the triangularity probability , , as the probability that three nodes with the trilateral similarity form a triangle . in this studywe consider weak triangles , each of which is a circle of three nodes with at least one link ( at any direction ) between each pair of the three nodes .figure [ fig : dsm](b ) shows that the triangularity probability is sensitive to the trilateral similarity .when the trilateral similarity increases from to , the triangularity probability increases two orders of magnitude for the wt10 g data and four orders of magnitude for the pnas data , respectively .we note that for a given value of content similarity or trilateral similarity , the cube of the ( bilateral ) linkage probability provides the lower bound of the triangularity probability .but these two quantities are not trivially related because the later is strongly determined by a network s triangular clustering structure .the degree - similarity mixture ( dsm ) model was introduced by menczer in 2004 .the model s generative mechanism incorporates content similarity in the formation of document links . at each step ,one new document is added and attached by new links to existing documents . 
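As a brief aside (our own illustration, not code from the paper), the content metrics defined earlier in this section, namely cosine similarity, trilateral similarity and the local clustering coefficient, can be computed with a few lines of code; the document vectors and adjacency list below are toy data invented for the example.

```python
# Toy illustration of the metrics used above: cosine similarity of
# keyword-frequency vectors, trilateral similarity of a triple of documents,
# and the local clustering coefficient of a node (undirected view).
import numpy as np
from itertools import combinations

def cosine_sim(v1, v2):
    n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
    return float(v1 @ v2 / (n1 * n2)) if n1 > 0 and n2 > 0 else 0.0

def trilateral_sim(v1, v2, v3):
    """Smallest pairwise content similarity among the three documents."""
    return min(cosine_sim(v1, v2), cosine_sim(v2, v3), cosine_sim(v1, v3))

def clustering_coefficient(node, neighbours):
    """neighbours: dict mapping node -> set of neighbours."""
    nbrs = neighbours[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in neighbours[a])
    return 2.0 * links / (k * (k - 1))

# toy keyword-frequency vectors over a 4-word vocabulary
docs = {"a": np.array([3, 1, 0, 0]), "b": np.array([2, 2, 0, 0]), "c": np.array([0, 1, 4, 1])}
print(cosine_sim(docs["a"], docs["b"]))                     # high: similar content
print(trilateral_sim(docs["a"], docs["b"], docs["c"]))      # limited by the weakest pair

adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}   # one triangle
print(clustering_coefficient("a", adj))                     # 1.0
```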
at timestep , the probability that the new document is attached to the existing document is where ; is the number of connections , or degree , of node ; is calculated from document content of the given network ; is a constant which is calculated based on real data ; and is a preferential attachment parameter .the first term of equation ( [ eq : dsm ] ) favours an old node which is already well connected and the second term favours one whose content is similar to the new node .the tunable parameter models the balance between choosing a popular node with large degree or choosing a similar node with high content similarity .|cc dsm model parameters & wt10 g & pnas + & 0.1 & 0.01 + & 3.5 & 3.5 + dsp model parameters & wt10 g & pnas + & 5 & 7 + & 1 & 4 + & & + & 6 & 8 + for each of the two document networks under study , we use the dsm model to grow ten networks to the same size of the real network and results are averaged over the ten networks ( see table [ tab : modelevaluation ] ) . table [ tab : parameters ] gives the model parameters which are obtained , as menczer did , by best fitting .menczer has shown that the dsm model is able to reproduce the degree distribution of document networks .figure [ fig : dsm](a ) shows the dsm model also produces a sound prediction on the relation between linkage probability and content similarity . in terms of triangular clustering properties ,table [ tab : modelevaluation ] shows that the model , however , produces only around of the total number of triangles contained in the real networks and underestimates the average clustering coefficient of the networks .figure [ fig : dsm](b ) shows the model also significantly underestimates the correlation between triangularity probability and trilateral similarity .in this paper we introduce a new generative model for document networks , we call it the degree - similarity product ( dsp ) model . our model is partially inspired by the multi - component graph growing models of .the model starts from an initial seed of a pair of linked nodes . at each time step ,one of the following two actions is taken : * growth : with probability , a new isolated node is introduced to the network .parameter is a constant , which is given by the numbers of nodes and links of the generated network , , and determines the average node degree of the generated network , i.e. .* dsp preferential attachment : with probability , a new link is attached between two nodes .the link starts from node and ends at node .the two nodes are chosen by the following preferential probabilities : } , \label{eq : target node}\ ] ] where is the out - degree of node , is the in - degree of node , and run over all existing nodes , .the content similarity is calculated from document content of the given network .parameters , , and all take positive values . and give nodes with or , respectively , an initial ability of acquiring links . allows that even very different documents ( with ) still have a chance to link with each other . tunes the weight of the content similarity in choosing a link s ending node .it is notable that equation [ eq : target node ] is a product function of degree and content similarity .this ensures that links are preferentially attached between nodes which are _ both _ popular _ and _ similar . 
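A minimal sketch of the growth mechanism just described is given below. Since the parameter symbols of the attachment probabilities are not reproduced in this extract, the names p_grow, a_out, a_in, c0 and alpha are placeholders for the constants that control the growth probability, the initial attractiveness in out- and in-degree, the chance left to dissimilar documents, and the weight of content similarity; the exact functional form is one plausible reading of the product rule, and the keyword vectors are random toy data rather than real document content.

```python
# Sketch of a DSP-style growth model: with probability p_grow add an isolated
# node; otherwise add a link whose start node is chosen by out-degree and whose
# end node is chosen by the *product* of in-degree and content similarity to
# the start node.  All parameter values and vectors are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def dsp_grow(n_steps, p_grow=0.3, a_out=1.0, a_in=1.0, c0=0.05, alpha=2.0, n_kw=20):
    content = [rng.random(n_kw), rng.random(n_kw)]   # toy keyword vectors for the seed pair
    out_deg, in_deg = [1, 0], [0, 1]
    links = {(0, 1)}
    for _ in range(n_steps):
        if rng.random() < p_grow:                    # growth: new isolated node
            content.append(rng.random(n_kw)); out_deg.append(0); in_deg.append(0)
            continue
        n = len(content)
        w_start = np.array(out_deg, float) + a_out   # start node: out-degree preference
        i = rng.choice(n, p=w_start / w_start.sum())
        sims = np.array([cosine(content[i], content[j]) for j in range(n)])
        w_end = (np.array(in_deg, float) + a_in) * (sims + c0) ** alpha
        w_end[i] = 0.0                               # no self-loops
        j = rng.choice(n, p=w_end / w_end.sum())
        if (i, j) not in links and (j, i) not in links:   # no duplicate links
            links.add((i, j)); out_deg[i] += 1; in_deg[j] += 1
    return links, out_deg, in_deg

links, out_deg, in_deg = dsp_grow(3000)
print(len(out_deg), "nodes,", len(links), "links, max in-degree:", max(in_deg))
```

Grown to a few thousand nodes, such a run already shows, qualitatively, a heavy-tailed in-degree distribution together with a tendency for links and triangles to form among similar nodes, which is the behaviour the model is designed to capture.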
as shown in the following section , this mechanism effectively increases the chance of forming triangles among similar nodes .for each of the two document networks , we generate ten networks using the dsp model with different random seeds .we avoid creating self - loops and duplicate links .the ten networks are grown to the same size as the target network .results are then averaged over the ten networks .as shown in table [ tab : modelevaluation ] , the dsp model well reproduces the number of triangles and the average clustering coefficient of the two document networks .figure [ fig : dsp - twonodes ] and figure [ fig : dsp - threenodes ] show that the model also closely resembles the two networks distribution of node in - degree , linkage probability as a function of content similarity , clustering coefficient as a function of node degree , and triangularity probability as a function of trilateral similarity . the average clustering coefficient of nodes with in - degree ( see figure [ fig : dsp - threenodes](a ) and ( b ) ) gives details of a network s triangular clustering structure .table [ tab : parameters ] gives the parameters used in the modelling .the value of the parameters are tuned for best fitting .our simulation shows that for both the real networks , the best modelling result is obtained when ( in equation [ eq : orignal node ] ) and ( in equation [ eq : target node ] ) take different values .this suggests that node out - degree and in - degree have different weights in choosing the starting and ending nodes of a link .the values of and for modelling the wt10 g data are smaller than those for the pnas data .this suggests that a poorly linked webpage has less difficulty in acquiring a new link in comparison with a poorly cited article. a larger value of is used for the pnas data .this indicates that content similarity plays a relatively stronger role than node connectivity in the growth of the citation network .it is known that document networks show a power - law degree distribution and a positive relation between the linkage probability and content similarity . in this paper , we show that document networks also contain very large numbers of triangles , high values of clustering coefficient , and a strong correlation between the triangularity probability and trilateral similarity .these three properties are not captured by the previous dsm model where a new node tends to link with an old node which is either popular or similar . 
our intuition is that a link tends to attach between two documents which are both popular and similar .we propose the degree - similarity product ( dsp ) model which resembles this behaviour by using the preferential attachment based on a product function of node connectivity and content similarity .our model reproduces all the above topological and content properties with remarkable accuracy .our work provides a new insight into the structure and evolution of document networks and has the potential to facilitate the research on new applications and algorithms on document networks .future work will mathematically analyse the dsp model , examine different types of triangles in document networks , and investigate the possible relation between the triangular clustering and the formation of communities in document networks .this work is supported by the national key basic research program of china under grant no.2004cb318109 and the national natural science foundation of china under grant number 60873245 .s.zhou is supported by the royal academy of engineering and the uk engineering and physical sciences research council ( epsrc ) under grant no.10216/70 .22 schank t and wagner d 2005 _ journal of graph algorithms and applications _ * 9 * 265 serrano m a and bogu m 2005 _ phys . rev ._ e * 74 * 056114 fagiolo g 2007 _ phys ._ e * 76 * 026107 arenas a , fernndez a , fortunato s and gmez s 2008 _ j. phys . a : math .theor . _ * 41 * 224001 zhou shi , cox i j and petricek v 2007 _ proceedings of the 9th ieee international symposium on web site evolution _
document networks are characteristic in that a document node , e.g. a webpage or an article , carries meaningful content . properties of document networks are not only affected by topological connectivity between nodes , but also strongly influenced by the semantic relation between content of the nodes . we observe that document networks have a large number of triangles and a high value of clustering coefficient . and there is a strong correlation between the probability of formation of a triangle and the content similarity among the three nodes involved . we propose the degree - similarity product ( dsp ) model which well reproduces these properties . the model achieves this by using a preferential attachment mechanism which favours the linkage between nodes that are both popular and similar . this work is a step forward towards a better understanding of the structure and evolution of document networks .
recently , there has been a lot of interest to integrate energy harvesting technologies into communication networks .several studies have considered conventional renewable energy resources , such as solar , wind etc , and have investigated optimal resource allocation techniques for different objective functions and topologies .however , the intermittent and unpredictable nature of these energy sources makes energy harvesting critical for applications where quality - of - service ( qos ) is of paramount importance , and most conventional harvesting technologies are only applicable in certain environments .an energy harvesting technology that overcomes the above limitations , is wireless power transfer ( wpt ) , where the nodes charge their batteries from electromagnetic radiation . in wpt, green energy can be harvested either from ambient signals opportunistically , or from a dedicated source in a fully - controlled manner ; in the latter case , green energy transfer can take place from more powerful nodes ( e.g. base stations ) that exploit conventional forms of renewable energy .initial efforts on wpt have focused on long - distance and high - power applications . however , both the low efficiency of the transmission process and health concerns for such high - power applications prevented their further development .therefore , most recent wpt research has focused on near - field energy transmission through inductive coupling ( e.g. , used for charging cell - phones , medical implants , and electrical vehicles ) .in addition , recent advances in silicon technology have significantly reduced the energy demand of simple wireless devices .wpt is an innovative technology and attracts the interest from both the academia and the industry ; some commercial wpt products already exist e.g. and several experimental results for different wpt scenarios are reported in the literature . with sensors and wireless transceivers getting ever smaller and more energy efficient , we envision that radio waves will not only become a major source of energy for operating these devices , but their information and energy transmission aspects will also be unified. simultaneous wireless information and power transfer ( swipt ) can result in significant gains in terms of spectral efficiency , time delay , energy consumption , and interference management by superposing information and power transfer .for example , wireless implants can be charged and calibrated concurrently with the same signal and wireless sensor nodes can be charged with the control signals they receive from the access point . in the era of internet of things ,swipt technologies can be of fundamental importance for energy supply to and information exchange with numerous ultra - low power sensors , that support heterogeneous sensing applications .also , future cellular systems with small cells , massive multiple - input multiple - output ( mimo ) and millimeter - wave technologies will overcome current path - loss effects ; in this case , swipt could be integrated as an efficient way to jointly support high throughputs and energy sustainability . in this paper , we give an overview of the swipt technology and discuss recent advances and future research challenges .more specifically , we explain the rectenna ( rectifying antenna ) circuit which converts microwave energy into direct current ( dc ) electricity and is an essential block for the implementation of the wpt / swipt technology . 
due to practical limitations, swipt requires the splitting of the received signal in two orthogonal parts .recent swipt techniques that separate the received signal in the domains of time , power , antenna , and space are presented . on the other hand, swipt entails fundamental modifications for the operation of a communication system and motivates new applications and services .from this perspective , we discuss the impact of swipt on the radio resource allocation problem as well as sophisticated cognitive radio ( cr ) scenarios which enable information and energy cooperation between primary and secondary networks .exchanging electromagnetic power wirelessly can be classified into three distinct cases : a ) near field power transfer employing inductive , capacitive or resonant coupling that can transfer power in the range of tenths of watts , over short distances of up to one meter ( sub - wavelength ) .b ) far field directive power beaming , requiring directive antennas , that can transfer power in the range of several mwatts at distances of up to several meters in indoor and outdoor environments .c ) far field , low - power , ambient rf power scavenging involving receivers that opportunistically scavenge the power transmitted from public random transmitters ( cell phone base stations , tv broadcasting stations ) for their communication with their peer nodes .for this last case the collected power is in the range of several , and the communication range can be up to several km assuming there is adequate power density . while there are several applications related to near field wireless charging , such as wireless charging of electric cars , cell phones or other hand - held devices , the main focus of this paper will be on far field wpt which involves the use of antennas communicating in the far field .a wireless power scavenger or receiver consists of the following components : a receiver antenna or antenna array , a matching network , a radio frequency to direct current ( rf - dc ) converter or rectifier , a power management unit ( pmu ) and the energy storage unit . upon the successful charging of the energy storage unit ,the storage unit , usually a rechargeable battery or a super capacitor , will provide power to the central processing unit ( cpu ) , the sensors and the low duty cycle communication transceiver .the schematic of this module is presented in fig .[ rectenna1 ] and a successful implementation of a wpt system that scavenges ambient power km away from tokyo tv tower is shown in fig .[ rectenna2 ] . based on friis free space equation the received rf power at the terminals of the antenna depends on the available power density and the antennas effective area and is given by : km away . ] where and are the transmitted and received power , respectively , and are the transmitter and receiver gains ( functions of the spatial variables ) respectively , denotes the wavelength , and is the polarization loss factor which accounts for the misalignment ( angle ) of the received electric intensity vector and the receiver antenna linear polarization vector . fromwe can deduce that in order to ensure maximum received power , the receiver antenna needs to have high gain , it has to be directed towards the transmitter ( maximum directivity direction ) , and it has to be aligned with the received -field ( ). 
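the friis relation quoted above can be evaluated numerically in a few lines. the snippet uses the standard free - space form p_r = p_t * g_t * g_r * (lambda / (4 * pi * d))^2 multiplied by the polarization loss factor; the transmit power, antenna gains, frequency and distance are illustrative values only, not parameters from any of the cited experiments.

```python
import math

def friis_received_power(p_t_w, g_t_dbi, g_r_dbi, freq_hz, dist_m, plf=1.0):
    """standard free-space friis equation; returns the received power in watts."""
    lam = 3.0e8 / freq_hz                      # wavelength in metres
    g_t = 10 ** (g_t_dbi / 10.0)               # dbi -> linear gain
    g_r = 10 ** (g_r_dbi / 10.0)
    return p_t_w * g_t * g_r * (lam / (4.0 * math.pi * dist_m)) ** 2 * plf

# illustrative far-field power-transfer link at 915 mhz over 10 m
p_r = friis_received_power(p_t_w=1.0, g_t_dbi=6.0, g_r_dbi=2.0,
                           freq_hz=915e6, dist_m=10.0, plf=0.5)
print(f"received power: {p_r * 1e6:.1f} microwatts")
```

the quadratic dependence on wavelength over distance makes clear why broadband or multi - band, well - aligned, high - gain receiving antennas are preferred for scavenging.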
however , these conditions can not be ensured in practice .for example , in a rayleigh multipath propagation environment the received signal has random polarization .consequently the optimum polarization for a receiver antenna is dual , linear , orthogonal polarization because it ensures the reception of the maximum average power regardless the received signal s polarization .if the maximum gain direction can not be guaranteed , omni - directional antennas are preferred instead .friis equation is frequency dependent and is applicable to narrowband signals .the total received power is calculated by integrating the received power over frequency , therefore , a broadband antenna will receive more power than a narrowband one . as a result ,wideband antennas or multi - band antennas are preferred .the rf - to - dc converter or rectifier is probably the most critical component of a wpt module and its design is the most challenging task .a rectifier consists of at least one non - linear device .most rectennas ( antenna and rectifier co - design ) reported in literature consist of only one diode . ideally , the conversion efficiency of a rectifying circuit with a single non - linear device , can reach up to % .unfortunately , this can only happen for specific values of and , where denotes the level of the input rf power at the rectifier , and is the delivered load . in more detail ,the rectenna structure consists of a single shunt full - wave rectifying circuit with one diode , a distributed line , and a capacitor to reduce the loss in the diode . depending on the requirements ,more complicated and sophisticated rectifier topologies can be used which are based on the well known dickson charge pump that can provide both rectification and impedance transformation .typically , schottky diodes are used as the non - linear devices because they have low forward voltage drop and allow very fast switching action , features useful for rectifiers .low forward voltage drop is needed because the received power is rather small , and fast switching action is needed to follow the relatively high rf frequency of the received signal .alternatively , it is possible to use cmos transistors or other transistors as the non - linear rectifying elements especially when integrated solutions are preferred .the major problem with rf - to - dc converters is that their efficiency , defined as depends on , , and the dc voltage , , across the load . generally the higher the incident rf power the higher the efficiency . for low power levels, efficiency can even drop to zero because the diodes forward voltage drop is too high .this is why the reported high efficiencies can not be seen in actual rf scavenging scenarios . as an example , the ambient power density measured km far from the tokyo tv tower was approximately / and the received power was about whereas high efficiency rectifiers require input powers between mw , ten to a hundred times higher . 
as a result , the measured efficiency was rather small .the final stage of the wpt module is the power management unit ( pmu ) that is responsible for maintaining the optimum load at the terminals of the rectifier despite the changing received rf power levels , and at the same time ensures the charging of the energy storage unit without additional loss .early information theoretical studies on swipt have assumed that the same signal can convey both energy and information without losses , revealing a fundamental trade - off between information and power transfer .however , this simultaneous transfer is not possible in practice , as the energy harvesting operation performed in the rf domain destroys the information content . to practically achieve swipt, the received signal has to be split in two distinct parts , one for energy harvesting and one for information decoding . in the following ,the techniques that have been proposed to achieve this signal splitting in different domains ( time , power , antenna , space ) are discussed .denotes the ps factor . ]if ts is employed , the receiver switches in time between information decoding and energy harvesting . in this case, the signal splitting is performed in the time domain and thus the entire signal received in one time slot is used either for information decoding or power transfer ( fig . [model1]a ) .the ts technique allows for a simple hardware implementation at the receiver but requires accurate time synchronization and information / energy scheduling .the ps technique achieves swipt by splitting the received signal in two streams of different power levels using a ps component ; one signal stream is sent to the rectenna circuit for energy harvesting and the other is converted to baseband for information decoding ( fig . [model1]b ) .the ps technique entails a higher receiver complexity compared to ts and requires the optimization of the ps factor ; however , it achieves instantaneous swipt , as the signal received in one time slot is used for both information decoding and power transfer .therefore , it is more suitable for applications with critical information / energy or delay constraints and closer to the information theoretical optimum .typically , antenna arrays are used to generate dc power for reliable device operation .inspired by this approach , the as technique dynamically switches each antenna element between decoding / rectifying to achieve swipt in the antenna domain ( fig . [ model1]c ) . in the as scheme, the receiving antennas are divided into two groups where one group is used for information decoding and the other group for energy harvesting .the as technique requires the solution of an optimization problem in each communication frame in order to decide the optimal assignment of the antenna elements for information decoding and energy harvesting . for a mimo decode - and - forward ( df ) relay channel , where the relay node uses the harvested energy in order to retransmit the received signal ,the optimization problem was formulated as a knapsack problem and solved using dynamic programming in . 
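as a concrete illustration of the ps operation just described, the sketch below routes a fraction rho of the received power to the rectenna and the rest to the decoder, and traces the resulting rate - energy trade - off; the rf - to - dc efficiency, noise power and bandwidth are assumed values, not figures from the cited works.

```python
import math

def ps_receiver(p_received_w, rho, eta_rf_dc=0.5, noise_w=1e-12, bandwidth_hz=1e6):
    """power-splitting swipt receiver: returns (harvested power in watts,
    achievable information rate in bits/s) for a splitting ratio rho."""
    p_harvest = eta_rf_dc * rho * p_received_w        # energy-harvesting branch
    snr = (1.0 - rho) * p_received_w / noise_w        # information-decoding branch
    rate = bandwidth_hz * math.log2(1.0 + snr)
    return p_harvest, rate

# sweep the splitting ratio to trace the rate-energy trade-off
for rho in (0.1, 0.5, 0.9):
    e, r = ps_receiver(p_received_w=1e-6, rho=rho)
    print(f"rho={rho:.1f}: harvested {e * 1e6:.2f} uW, rate {r / 1e6:.2f} Mbit/s")
```

a ts receiver corresponds to alternating between rho = 1 and rho = 0 across time slots, which is why it is simpler to build but further from the information - theoretic optimum.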
because optimal as suffers from high complexity , low - complexity asmechanisms have been devised which use the principles of generalized selection combining ( gsc ) .the key idea of gsc - as is to use the out of antennas with the strongest channel paths for either energy ( gsce technique ) or information ( gsci technique ) and the rest for the other operation .the ss technique can be applied in mimo configurations and achieves swipt in the spatial domain by exploiting the multiple degrees of freedom ( dof ) of the interference channel .based on the singular value decomposition ( svd ) of the mimo channel , the communication link is transformed into parallel eigenchannels that can convey either information or energy ( fig .[ model1]d ) . at the output of each eigenchannelthere is a switch that drives the channel output either to the conventional decoding circuit or to the rectification circuit .eigenchannel assignment and power allocation in different eigenchannels is a difficult nonlinear combinatorial optimization problem ; in an optimal polynomial complexity algorithm has been proposed for the special case of unlimited maximum power per eigenchannel ._ numerical example : _ the performance of the discussed swipt techniques is illustrated for the mimo relay channel introduced in section [ sec : as ] assuming a normalized block fading rayleigh . in the considered set - up , a single - antenna source communicates with a single - antenna destination through a battery - free mimo relay node , which uses the harvested energy in order to power the relaying transmission .we assume that the source transmits with power and spectral efficiency bits per channel use ( bpcu ) ; the relay node has global channel knowledge , which enables beamforming for the relaying link .an outage event occurs when the destination is not able to decode the transmitted signal and the performance metric is the outage probability .the first observation is that gsci outperforms gsce scheme for and , respectively .this result shows that diversity gain becomes more important than energy harvesting due to the high rf - to - dc efficiency .in addition , the gsci scheme with is the optimal gsc - based strategy and achieves a diversity gain equal to two .it can be also seen that the ps scheme outperforms the as scheme with a gain of db for high , while the ts scheme provides a poor performance due to the required time division . for gsci , gsce , ps , as and ts ; the simulation setup is bpcu , antennas , and rf - to - dc efficiency . ]this section discusses the benefits of employing swipt on resource allocation applications .utility - based resource allocation algorithm design has been heavily studied in the literature for optimizing the utilization of limited resources in the physical layer such as energy , bandwidth , time , and space in multiuser systems .in addition to the conventional qos requirements such as throughput , reliability , energy efficiency , fairness , and delay , the efficient transfer of energy plays an important role as a new qos requirement for swipt .resource allocation algorithm design for swipt systems includes the following aspects : mhz and the information receiver and energy harvesting receivers are located at meters and meters from the transmitter , respectively .the total transmit power , noise power , transceiver antenna gain , and rf - to - dc conversion loss are set to watt , dbm , dbi , and db , respectively . 
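a hedged sketch of the gsc - based antenna switching rule described at the beginning of this passage: the k antennas with the strongest channel gains are assigned to information decoding (gsci) or to energy harvesting (gsce), and the remaining antennas to the other operation. the rayleigh channel draw and the rf - to - dc efficiency used below are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def gsc_antenna_split(channel_gains, k, strongest_for="information"):
    """split antenna indices into (information set, energy set) by channel strength."""
    order = np.argsort(channel_gains)[::-1]           # strongest antennas first
    top, rest = order[:k], order[k:]
    if strongest_for == "information":                # gsci: best k antennas decode
        return top, rest
    return rest, top                                  # gsce: best k antennas harvest

# illustrative draw: 4 receive antennas with rayleigh-fading power gains
gains = rng.exponential(scale=1.0, size=4)
info_set, harvest_set = gsc_antenna_split(gains, k=2, strongest_for="information")
p_harvested = 0.5 * gains[harvest_set].sum()          # assumed rf-to-dc efficiency 0.5
print("info antennas:", info_set, "harvest antennas:", harvest_set,
      "harvested (normalized):", round(p_harvested, 3))
```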
] * joint power control and user scheduling the rf signal acts as a dual purpose carrier for conveying information and energy to the receivers simultaneously .however , the wide dynamic range of the power sensitivity for energy harvesting ( dbm ) and information decoding ( dbm ) is an obstacle for realizing swipt . as a result, joint power control and user scheduling is a key aspect for facilitating swipt in practice .for instance , idle users experiencing high channel gains can be scheduled for power transfer to extend the life time of the communication network . besides, opportunistic power control can be used to exploit the channel fading for improved energy and information transfer efficiency . fig .[ fig : cap_eh ] depicts an example of power control in swipt systems .we show the average system capacity versus the average total harvested energy in a downlink system . in particular , a transmitter equipped with antennas is serving one single - antenna information receiver and single - antenna energy harvesting receivers .as can be observed , with optimal power control , the trade - off region of the system capacity and the harvested energy increases significantly with . besides , the average harvested energy improves with the number of energy harvesting receivers . * energy and information scheduling for passive receivers such as small sensor nodes ,uplink data transmission is only possible after the receivers have harvested a sufficient amount of energy from the rf in the downlink .the physical constraint on the energy usage motivates a harvest - then - transmit " design .allocating more time for energy harvesting in the downlink leads to a higher amount of harvested energy which can then be used in the uplink .yet , this also implies that there is less time for uplink transmission which may result in a lower transmission data rate .thus , by varying the amounts of time allocated for energy harvesting and information transmission , the system throughput can be optimized .* interference management in traditional communication networks , co - channel interference is recognized as one of the major factors that limits the system performance and is suppressed or avoided via resource allocation . however , in swipt systems , the receivers may embrace strong interference since it can act as a vital source of energy .in fact , injecting artificial interference into the communication network may be beneficial for the overall system performance , especially when the receivers do not have enough energy for supporting their normal operations , since in this case , information decoding becomes less important compared to energy harvesting . besides , by exploiting interference alignment and/or interference coordination , a wireless charging zone " can be created by concentrating and gathering multicell interference in certain locations .swipt also opens up new opportunities for cooperative communications .we present one example where swipt improves the traditional system design of cooperative cr networks ( ccrns ) .ccrns are a new paradigm for improving the spectrum sharing by having the primary and secondary systems actively seek opportunities to cooperate with each other . 
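returning to the harvest - then - transmit design discussed above, the trade - off in the time allocated to downlink energy harvesting can be explored with a one - dimensional search over the harvesting fraction tau; the channel gains, conversion efficiency and noise power below are assumed values chosen only to make the example run.

```python
import numpy as np

def uplink_throughput(tau, p_dl=1.0, h_dl=1e-3, g_ul=1e-3, eta=0.5, noise=1e-9):
    """normalized bits per block for a harvest-then-transmit node that harvests
    for a fraction tau of the block and transmits in the remaining 1 - tau."""
    energy = eta * p_dl * h_dl * tau                  # energy harvested in the downlink
    p_ul = energy / (1.0 - tau)                       # uplink transmit power
    return (1.0 - tau) * np.log2(1.0 + p_ul * g_ul / noise)

taus = np.linspace(0.01, 0.99, 99)
rates = np.array([uplink_throughput(t) for t in taus])
best = taus[rates.argmax()]
print(f"best harvesting fraction: {best:.2f}, throughput {rates.max():.2f} bits/s/Hz")
```

the concave shape of the resulting throughput curve reflects the tension described in the text: more harvesting time raises the uplink power but shrinks the uplink transmission time.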
the secondary transmitter ( st ) helps in relaying the traffic of the primary transmitter ( pt ) to the primary user ( pu ) , and in return can utilize the primary spectrum to serve its own secondary user ( su ) .however , to enable this cooperation , the st should both possess a good channel link to the primary system and have sufficient transmit power .while the former can be achieved by proper placement , the latter requirement can not be easily met especially when the st is a low - power relay node rather than a powerful base station ( bs ) , which renders this cooperation not meaningful .swipt could provide a promising solution to address this challenge by encouraging the cooperation between the primary and secondary systems at both the information and the energy levels , i.e. , the pt will transmit both information and energy to the st , in exchange , the low - power st relays the primary information .compared to the traditional ccrn , this approach creates more incentives for both systems to cooperate and therefore improves the system overall spectrum efficiency without relying on external energy sources .we illustrate the performance gain by studying a joint information and energy cooperation scheme using the amplify - and - forward protocol and the power splitting technique .two channel phases are required to complete the communication . in phasei , the pt broadcasts its data and both the st and the pu listen .the st then splits the received rf signal into two parts : one for information processing then forwarding to the pu and the other for harvesting energy , with relative power ratio of and , respectively . in phase ii , the st superimposes the processed primary data with its own precoded data , then transmits it to both the pu and the su .the st jointly optimizes power allocation factor and the precoding vectors to the pu and su to achieve the maximum rates . in fig .[ fig : rate : region ] , we show the achievable rate region of the proposed information and energy cooperation schemes and compare it with the conventional information cooperation only scheme .we consider a scenario where the distances from the st to all the other terminals are m , while the distance from the pt to the pu is m , therefore assistance from the st is usually preferred by the pt .we assume that the st has transmit antennas and all other terminals have a single antenna .the primary energy is set to db while the available secondary energy is db .path loss exponent is and the factor for the rician channel model is set to db .the rf - to - dc efficiency is equal to and .it is seen that the achievable rate regions are greatly enlarged thanks to the extra energy cooperation even with rf - to - dc efficiencies as low as .when the required pu rate is bps / hz , the su can double or triple its rate compared to the case without energy cooperation as varies from to . 
when the su rate is bps / hz , the pu enjoys 75% higher data rate when .the proposed additional energy cooperation clearly introduces a substantial performance gain over the existing information cooperation only cr scheme , and could be a promising solution for the future ccrns .this survey paper provides an overview of the swipt technology .different swipt techniques that split the received signal in orthogonal components have been discussed .we have shown that swipt introduces fundamental changes in the resource allocation problem and influences basic operations such as scheduling , power control , and interference management .finally , a sophisticated cr network that enables information / energy cooperation between primary and secondary systems has been discussed as an example of new swipt applications .swipt imposes many interesting and challenging new research problems and will be a key technology for the next - generation communication systems . in the following ,we discuss some of the research challenges and potential solutions : * path loss : the efficiency of swipt is expected to be unsatisfactory for long distance transmission unless advanced resource allocation and antenna technology can be combined .two possible approaches to overcome this problem include the use of massive mimo and coordinate multipoint systems .the former increase the dof offered to harvest energy and create highly directive energy / information beams steered towards the receivers .the later provides spatial diversity for combating path loss by reducing the distance between transmitters and receivers . besides , the distributed transmitters may be equipped with traditional energy harvesters ( such as solar panels ) and exchange their harvested energy over a power grid so as to overcome potential energy harvesting imbalances in the network .* communication and energy security : transmitters can increase the energy of the information carrying signal to facilitate energy harvesting at the receivers . however , this may also increase their susceptibility to eavesdropping due to the broadcast nature of wireless channels . on the other hand , receiversrequiring power transfer may take advantage of the transmitter by falsifying their reported channel state information .therefore , new qos concerns on communication and energy security naturally arise in swipt systems . * hardware development : despite the wealth of theoretical techniques for swipt , so far , hardware implementations have mostly been limited to wpt systems that opportunistically harvest ambient energy .thus , the development of swipt circuits is fundamental to investigate the tradeoff between swipt techniques , occurring due to inefficiencies of different circuit modules .for example , the ts technique is theoretically less efficient than ps , but the later suffers from power splitting losses that are not accounted for in theoretical studies . * applications : swipt technology has promising applications in several areas that can benefit from ultra - low power sensing devices .potential applications include structure monitoring by embedding sensors in buildings , bridges , roads , etc . 
, healthcare monitoring using implantable bio - medical sensors and building automation through smart sensors that monitor and control different building processes .however , for the successful realization of such swipt applications , several challenges have to be overcome at various layers from hardware implementation over protocol development to architectural design .this work was partially supported by the research promotion foundation , cyprus under the project koyltoyra / bp - ne/0613/04 `` full - duplex radio : modeling , analysis and design ( fd - rd ) '' .r. j. vyas , b. cook , y. kawahara , and m. m. tentzeris , `` e - wehp : a batteryless embedded sensor platform wirelessly powered from ambient digital - tv signal , '' _ ieee trans .microwave th ._ , vol.61 , pp.24912505 , june 2013 .p. nintanavongsa , u. muncuk , d. r. lewis , and k. r. chowdhury , `` design optimization and implementation for rf energy harvesting circuits , '' _ieee j. emerg .topics circ .2 , pp . 2433 , march 2012 .i. krikidis , s. sasaki , s. timotheou , and z. ding , `` a low complexity antenna switching for joint wireless information and energy transfer in mimo relay channels , '' _ ieee trans ._ , vol.62 , no.5 , pp.15771587 , may 2014 . d. w. k. ng , e. s. lo , and r. schober , `` wireless information and power transfer : energy efficiency optimization in ofdma systems , '' _ ieee trans .wireless commun ._ , vol . 12 , no . 12 , pp63526370 , dec . 2013 .d. w. k. ng , e. s. lo , and r. schober , `` robust beamforming for secure communication in systems with wireless information and power transfer , '' _ ieee trans .wireless commun ._ , vol . 13 , pp .45994615 , aug .o. simeone , i. stanojev , s. savazzi , y. bar - ness , u. spagnolini , and r. pickholtz , `` spectrum leasing to cooperating secondary ad hoc networks , '' _ ieee j. sel .areas commun .1 , pp . 203213 , jan . 2008 .g. zheng , s.h .song , k. k. wong , and b. ottersten , `` cooperative cognitive networks : optimal , distributed and low - complexity algorithms '' , _ ieee trans .sig . process .277 2790 , june 2013 .
energy harvesting for wireless communication networks is a new paradigm that allows terminals to recharge their batteries from external energy sources in the surrounding environment . a promising energy harvesting technology is wireless power transfer where terminals harvest energy from electromagnetic radiation . thereby , the energy may be harvested opportunistically from ambient electromagnetic sources or from sources that intentionally transmit electromagnetic energy for energy harvesting purposes . a particularly interesting and challenging scenario arises when sources perform simultaneous wireless information and power transfer ( swipt ) , as strong signals not only increase power transfer but also interference . this paper provides an overview of swipt systems with a particular focus on the hardware realization of rectenna circuits and practical techniques that achieve swipt in the domains of time , power , antennas , and space . the paper also discusses the benefits of a potential integration of swipt technologies in modern communication networks in the context of resource allocation and cooperative cognitive radio networks .
recent statistical developments in the assessment of space time point process models have resulted in new , powerful model evaluation tools .these tools include residual point process methods such as thinning , superposition and rescaling , comparative quadrat methods such as pearson residuals and deviance residuals , and weighted second - order statistics for assessing particular features of a model such as its background rate or the degree of spatial clustering .unfortunately , these methods have not yet become widely used in seismology .indeed , recent efforts to assess and compare different space time models for earthquake occurrences have led to developments such as the regional earthquake likelihood models ( relm ) project [ ] and its successor , the collaboratory for the study of earthquake predictability ( csep ) [ ] .the relm project was initiated to create a variety of earthquake forecast models for seismic hazard assessment in california .unlike previous projects that were addressing earthquake forecast modeling for seismic hazard assessment , the relm participants decided to develop a multitude of competing forecasting models and to rigorously and _ prospectively _ test their performance in a dedicated testing center [ ] . with the end of the relm project , the forecast models became available and the development of the testing center was done within the scope of csep .csep inherited not only all models developed for relm and is testing them for the previously defined period of 5 years , but also a suite of forecast performance tests that was developed during the relm project . in relm, a community consensus was reached that all models will be tested with these tests [ , ] .the tests include the number or n - test that compares the total forecasted rate with the observation , the likelihood or l - test that assesses the quality of a forecast in the likelihood space , and the likelihood - ratio or r - test that compares the performance of two forecast models .however , over time several drawbacks of these tests were discovered [ ] and the need for more and powerful tests became clear to better discern between closely competing models .the n - test and l - test simply compare the quantiles of the total numbers of events in each bin or likelihood within each bin to those expected under the given model , and the resulting low - power tests are typically unable to discern significant lack of fit unless the overall rate of the model fits extremely poorly .further , even when the tests do reject a model , they do not typically indicate _ where _ or _when _ the model fits poorly , or how it could be improved .the purpose of the current paper is to review modern model evaluation techniques for space time point processes and to demonstrate their use and practicality on earthquake forecasting models for california .the relm project represents an ideal test case for this purpose , as a variety of relevant , competing space time models are included , and these models yield genuinely prospective forecasts of earthquake rates based solely on prior data .the rates are specified per bins which are spatial - magnitude - temporal volumes ( called pixels in the statistical domain ) .these bins have been predefined in a community consensus process in order to have the model forecast rates in the exact same bins .the models forecasts translate into strongly different estimates of seismic hazard .its accurate estimation is important for seismic hazard assessment , urban planning , disaster 
preparation efforts and in the pricing of earthquake insurance premiums [ ] , so distinguishing among competing models is an extremely important task . in section [ sec2 ]we describe a group of earthquake forecast models to be evaluated , along with the observed earthquake occurrences used to assess the fit of the models .the methods currently used by seismologists for model evaluation are briefly reviewed in section [ sec3 ] .pixel - based residuals for model comparison are discussed in section [ sec4 ] . in section [ sec5 ]weighted second - order statistics , primarily the weighted k - function , are investigated .section [ sec6 ] reviews various residual methods based on rescaling , thinning and superposition , and introduces and applies the method of super - thinning .section [ sec7 ] summarizes some of the benefits and weaknesses of these tools .csep expanded and now collects and evaluates space time earthquake forecasts for different regions around the world , including california , japan , new zealand , italy , the northwest pacific , the southwest pacific and the entire globe .the forecasts are evaluated in testing centers in japan , switzerland , new zealand and the united states . the u.s .testing center is located at the southern california earthquake center ( scec ) and hosts forecast experiments for california , the northwest and southwest pacific , and the global experiments .we have chosen to apply a variety of measures to assess the fit of a collection of the california forecast models currently being tested at scec .the forecast models are arranged in classes according to their forecast time period : five - year , three - month and one - day .there are two types of forecasts , rate - based and alarm - based . within the five - year groupare a set of rate - based models developed as part of the relm project . in this paperwe evaluate the relm project rate - based one - day and five - year models , and will be ignoring the three - month models due to their very recent introduction to the csep testing center .all csep forecasts are grid - based , providing a forecast in each spatial - magnitude bin within a given time window .for the one - day models , each bin is of size longitude ( lon ) by latitude ( lat ) by units magnitude for earthquake magnitudes ranging from 3.95 to 8.95 . for magnitudes 8.9510, there is a single bin of size by by units of magnitude .the relm forecasts are identical , except with a lower magnitude bound of 4.95 instead of 3.95 . for each bin, an expected number of earthquakes in the forecast period is forecasted .there are five models in the relm project that are considered mainshock aftershock models .these models forecast both mainshocks and aftershocks with a single forecast for a period of five years .models proposed in and , which we will call models a and b , respectively , base their forecasts exclusively on previous seismicity . 
the model proposed in , denoted model c here , is based on other geodetic or geological data .all relm models are five - year forecasts , beginning 1 january 2006 , 00:00 utc and ending 1 january 2011 , 00:00 utc .csep is also testing two one - day forecast models : the epidemic - type aftershock sequences ( etas ) model [ , ] and the short - term earthquake probabilities ( step ) model [ ] since september of 2007 .both of these models produce forecasts based exclusively on prior seismicity .csep evaluates the relm models using a lower magnitude cutoff of 4.95 .because there are so few earthquakes of magnitude 4.95 and higher in the catalog over the observed period we use a lower magnitude cutoff of 3.95 instead .the forecasts for models a , b and c were extrapolated using each model s fitted magnitude distribution .models a and b assume the magnitude distribution follows a tapered gutenberg richter law [ ] with a _b_-value of 0.95 and a corner magnitude of 8.0 .model c uses a _b_-value of 0.975 and the same corner magnitude . model a adjusts the magnitude distribution in a small region in northern california influenced by geothermal activity ( 122.9.7 and .9 ) by using a _b_-value of 1.94 instead of 0.95 .earthquake catalogs containing the estimated earthquake hypocenter locations and magnitudes were obtained from the advanced national seismic system ( anss ) . from 1 january 2006 to 1 september 2009 there were 142 shallow earthquakes with a magnitude of 3.95 or larger which occurred in relm s spatial - temporal window ( see figure [ alleqs ] ) . in the relm testing region . ]note that each relm model does not necessarily produce a forecasted seismicity rate for every pixel in the space time region .hence , each model essentially has its own relevant spatial - temporal observation region , and thus we may have different numbers of observed earthquakes corresponding to different models .for instance , all 142 recorded earthquakes from 1 january 2006 to 1 september 2009 corresponded to pixels where model a made forecasts , but only 81 corresponded to pixels where model b made forecasts , and 86 where model c made forecasts .85 earthquakes of magnitude 3.95 or greater occurred since 1 september of 2007 , all of which corresponded to forecasts made by etas but only 83 of which corresponded to forecasts made by step .csep initially implemented two numerical summary tests , called the likelihood - test ( l - test ) and the number - test ( n - test ) , to evaluate the fit of the earthquake forecast models they collect .a full description of these methods can be found in .these goodness - of - fit tests are similar to other numerical goodness - of - fit summaries such as the akaike information criterion [ ] and the bayesian information criterion [ ] in that they provide a score for the overall fit of the model without indicating where the model may be fitting poorly .the l - test , described in , works by first simulating some fixed number of realizations from the forecast model . the log - likelihood ( )is computed for the observed earthquake catalog ( ) and each simulation ( , for ) .the quantile score , , is defined as the fraction of simulated likelihoods that are less than the observed catalog likelihood : where denotes the indicator function .if is close to zero , then the model is considered to be inconsistent with the data , and can be rejected . 
otherwise, the model is not rejected and further tests are necessary. the n - test is similar to the l - test, except that the quantile score examined is instead the fraction of simulations that contain fewer points than the actual observed number of points in the catalog. that is, the n - test score is the fraction of simulated catalogs whose total number of points is smaller than the number of points in the observed catalog. with the n - test, the model is rejected if the score is close to 0 or to 1. if a model is underpredicting or overpredicting the total number of earthquakes, then the score is close to 1 or to 0, respectively, and the model will likely be rejected with the n - test. table [ tab ] shows results for the l - and n - tests for selected models. the l - test would lead to rejection of models a, b, c and step, as seen by the very low scores. the etas model would not be rejected based on the l - test score alone, requiring the application of the n - test for a final decision. at the considered level of significance, the n - test scores indicate that the step model is underpredicting the total number of earthquakes, while models a, b, c and etas are significantly overpredicting earthquake rates.

table [ tab ] : l - test and n - test results for selected models.

model              log - likelihood   l - test score   observed events   n - test score
a. helmstetter     .46                0.000            142               0.000
b. kagan           .43                0.008            81                0.001
c. shen            .20                0.002            86                0.043
etas               .69                1.00             85                0.00
step               .43                0.00             83                0.99

unfortunately, in practice, both quantile scores test essentially the same thing, namely, the agreement between the observed and modeled _ total _ number of points. indeed, for a typical model, the likelihood for a given simulated earthquake catalog depends critically on the number of points in the simulation. baddeley et al. introduced methods for residual analysis of purely spatial point processes, based on comparing the total number of points within predetermined bins to the number forecast by the model. such methods extend readily to the spatial - temporal case, and are quite natural for evaluating the csep forecasts since the models are constrained to have a constant conditional intensity within prespecified bins. the differences between observed and expected numbers of events within bins can be standardized in various ways, as described in what follows. earthquake occurrence times and locations are typically modeled as space - time point processes, with the estimated epicenter or hypocenter of each earthquake representing its spatial location. along with each observation, one may also record several _ marks _ which may be used in the model to help forecast future events; an important example of a mark is the magnitude of the event. space - time point process models are often characterized by their associated conditional intensity, that is, the infinitesimal rate at which one expects points to occur around a given time and location, given full information on the occurrences of points prior to that time, and given the marks and possibly other covariate information observed before that time. note that due to the lack of a natural ordering of points in the plane, purely spatial point processes are typically characterized instead by their papangelou intensities [ ], which may be thought of as the limiting rate at which points are expected to accumulate within balls centered at a location, given what _ other _ points have occurred at all locations outside of these balls, as the size of the balls shrinks to zero.
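both quantile scores can be computed directly once catalogs are simulated from a gridded forecast. the sketch below assumes the forecast is a vector of expected counts per space - time - magnitude bin and draws simulated catalogs as independent poisson counts in each bin, which matches the rate - based forecast format but is otherwise an illustrative modelling choice.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)

def log_likelihood(counts, forecast):
    """poisson log-likelihood of binned counts under a gridded rate forecast."""
    return poisson.logpmf(counts, np.maximum(forecast, 1e-12)).sum()

def l_and_n_test(observed_counts, forecast, n_sim=10000):
    """return (gamma, delta): the l-test and n-test quantile scores."""
    sims = rng.poisson(forecast, size=(n_sim, len(forecast)))
    ll_obs = log_likelihood(observed_counts, forecast)
    ll_sim = np.array([log_likelihood(s, forecast) for s in sims])
    gamma = np.mean(ll_sim < ll_obs)                              # l-test score
    delta = np.mean(sims.sum(axis=1) < observed_counts.sum())     # n-test score
    return gamma, delta
```

as the text notes, both scores are driven mainly by the total number of points, which motivates the bin - level residuals introduced next.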
for a review of point processes and conditional intensities ,see .an aggregate conditional intensity is derived for each spatial bin for all models by summing the forecast rates over all magnitude bins and then dividing the sum by the area of each pixel .since we are evaluating the five - year models a , b and c after only 44 of the 60 months of the forecast period have elapsed , their conditional intensities are scaled by a factor of .consider a model for the conditional intensity at any time and location ._ raw residuals _ may be defined following as simply the number of observed points minus the number of expected points in each pixel , that is , where is the number of points in bin .note that consider only the case of purely spatial point processes characterized by their papangelou intensities ; showed that one may nevertheless extend the definition to the spatial - temporal case using the conventional conditional intensity as in ( [ rawres ] ) .one may wish to rescale the raw residuals in such a way that they have mean 0 and variance approximately equal to 1 .the _ pearson residuals _ are defined as for all .these are analogous to the pearson residuals in poisson log - linear regression .both step and model c have several pixels with forecasted conditional intensities of 0 , which complicates the standardization of the corresponding residuals for these two models .pearson residuals were obtained for each of the remaining models .for instance , figure [ kaganpearson ] shows that the largest pearson residual for model b is 2.817 located in a pixel in mexico , just south of the california border near the imperial valley fault zone ( and ) , which is the location of a large cluster of earthquakes .another very large residual for model b can be seen just above the san bernardino and inyo county border near the panamint valley fault zone ( and ) .this is also the location of the largest etas pearson residual ( 2.221 ) .the largest pearson residual for model a ( 4.068 ) is located at a small earthquake cluster near the peterson mountain fault northwest of reno , nevada ( and ) .note that when spatial - temporal bins are very small and/or the estimated conditional intensity in some bins is very low , as in this example , the raw and especially the standardized residuals are highly skewed . in such cases , the residuals in such pixels where points happen to occur tend to dominate , and the skew may complicate the analysis . indeed , pearson residuals fail to provide much useful information about the model s fit in the other pixels where earthquakes did not happen to occur , and graphical displays of the pearson residuals tend to highlight little more than the locations of the earthquakes themselves .therefore , while pearson and raw residuals may help to identify individual bins containing earthquakes that require an adjustment in their forecasted rates , pearson and raw residuals generally fail to identify other locations where the models may fit relatively well or poorly .a useful method for comparing models is using the deviance residuals proposed by , in analogy with deviances defined for generalized linear models in the regression framework . 
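a minimal sketch of the binned residuals: the raw residual is the observed minus the expected count in each space - time pixel, and dividing by the square root of the expected count gives the mean - zero, roughly unit - variance rescaling that is named in the next paragraph. pixels with a forecast rate of exactly zero are returned as nan, since the rescaling is undefined there.

```python
import numpy as np

def raw_residuals(observed_counts, expected_counts):
    """observed minus expected number of events in each space-time bin."""
    return np.asarray(observed_counts, dtype=float) - np.asarray(expected_counts, dtype=float)

def standardized_residuals(observed_counts, expected_counts):
    """raw residuals divided by the square root of the expected count;
    nan where the forecast rate is exactly zero."""
    expected = np.asarray(expected_counts, dtype=float)
    raw = np.asarray(observed_counts, dtype=float) - expected
    with np.errstate(divide="ignore", invalid="ignore"):
        r = raw / np.sqrt(expected)
    return np.where(expected > 0, r, np.nan)
```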
as with pearson residuals, is divided into evenly spaced bins , and the differences between the log - likelihoods within each bin for the two competing models are examined .given two models for the conditional intensity , and , the deviance residual in each bin , , of against is given by positive residuals imply that the model fits better in the given pixel and negative residuals imply that provides better fit . by simply taking the sum of the deviance residuals , , we obtain a log - likelihood ratio score , giving us an overall impression of the improvement in fit from the better fitting model .if or is estimated , then one may use this estimate in computing the deviance residuals , and similarly if or is given , that is , not estimated , then one would simply use this given model in computing the residuals .figure [ devcombined](a ) shows the deviance residuals for model a versus model b. model a outperforms model b in almost all locations where earthquakes actually occurred , and , in particular , model a forecasts the imperial earthquake cluster and another cluster near the laguna salada and yuha wells faults just north of the california mexico border ( and ) much better than model b. the pixel with the largest residual , highlighted in figure [ devcombined](b ) , is located in the imperial cluster .model b seems to fit better in several selected areas , mostly regions close to known faults but where earthquakes did not happen to occur in the time span considered . in most locations , however , including the vast majority of locations far from seismicity , model a offers better fit , as model b tends to overpredict events in these locations more than model a. overall , the log - likelihood ratio score is 84.393 , indicating a significant improvement from model a compared to model b. 7.468 . ]results are largely similar for model a versus model c , as seen in figure [ devcombined2](a ) , with model a forecasting the rate at all observed earthquake clusters , including a cluster at the extreme southern end of the observation region on the baja , mexico peninsula ( and ) , more accurately than model c. overall , model a offers substantial improvement over model c with a likelihood ratio score of 86.427 .residuals for model b versus model c can be seen in figure [ devcombined2](b ) .model c forecasts the rate near the imperial cluster better , and model b forecasts more accurately around the laguna salada cluster .there are vast regions where model b outperforms model c and vice versa .overall , model c fits slightly better than model b , with a likelihood ratio score of .468 .deviance residuals for etas versus step ( not shown ) reveal that the etas model performs somewhat better for this data set overall , with a log - likelihood ratio score of 76.261 , providing substantially more accurate forecasts in nearly all locations , especially where earthquakes occur .a common model assessment tool used for detecting clustering or inhibition in a point process is ripley s k - function [ ] , defined as the average number of points within of any given point divided by the overall rate , and is typically estimated via where is the area of the observation region , is the total number of observed points , and is the proportion of area of the ball centered at and passing through that falls within the observation region [ see , ] . 
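with two gridded forecasts and the observed counts per bin, the deviance residual in each bin reduces to the difference of the two poisson log - likelihoods, and summing over bins gives the log - likelihood ratio score quoted in the text. this is a sketch under the assumption of piecewise - constant rates within bins, which is how the csep forecasts are specified.

```python
import numpy as np
from scipy.stats import poisson

def deviance_residuals(observed_counts, rate_model_1, rate_model_2):
    """per-bin difference of poisson log-likelihoods (model 1 minus model 2);
    positive values favour model 1, negative values favour model 2."""
    l1 = poisson.logpmf(observed_counts, np.maximum(rate_model_1, 1e-12))
    l2 = poisson.logpmf(observed_counts, np.maximum(rate_model_2, 1e-12))
    return l1 - l2

# the sum of the per-bin residuals is the overall log-likelihood ratio score:
# score = deviance_residuals(obs, expected_model_1, expected_model_2).sum()
```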
for a homogeneous poisson process in , k , suggested a variance stabilized version of the k - function , called the l - function , given by l .the null hypothesis for most second - order tests such as ripley s k - function is that the point process is a homogeneous poisson process . argues that this is a poor null hypothesis for the case of earthquake occurrences because a homogeneous poisson model fits so poorly to actual data . described a variety of weighted analogues of second - order tests that are useful when the null hypothesis in question is more general .most useful among these is the weighted analogue of ripley s k - function , first introduced by .they discussed the case where the null model , can be any inhomogeneous poisson process , and this was extended by to the case of non - poisson processes as well .the weighted k - function is useful for testing the degree of clustering in the model , and was used by to assess a spatial point process model fitted to southern california earthquake data .the standard estimate of the weighted k - function is given by where ( ) , is the indicator function , and is the conditional intensity at point under the null hypothesis .edge - corrected modifications can also be used , especially when the observed space is irregular . proposed a local empirical k - function which can assess lack - of - fit in subsets of and can be compared to the weighted k - function applied globally to . here , we apply the weighted k - function globally to derive an overall impression of each model s lack of fit .as with ripley s k - function , under the null hypothesis , for a spatial point process with intensity , [ ] . to obtain a centered and standardized version , one can also transform the weighted k - function into a weighted l - function as before , and plot versus .space time versions of the l - function have been proposed , but for the purpose of examining , in particular , the range and degree of purely spatial clustering in each model , it seems preferable to apply the purely spatial weighted l - function previously described , after first integrating the conditional intensities of the etas and step models over time .figure [ allwk ] shows the estimated centered weighted l - functions for the five models considered here , along with 95% confidence bounds based on the normal approximation in , who showed that asymptotically , the distribution of the weighted k - function should generally obey ^ 2}\biggr).\ ] ] the catalog of observed earthquakes is significantly more clustered than would be expected according to model a , especially within distances of degrees of longitude / latitude , or approximately km .however , at distances greater than , or approximately km , the observed data exhibit greater inhibition than one would expect according to model a. 
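one common estimator of the weighted (inhomogeneous) k - function, and its centered l - version, can be coded directly from the idea described above: pairs of points are weighted by the reciprocal of the product of the null - model intensities at the two locations. the sketch below omits edge correction and is an illustration of the general construction rather than the exact estimator used in the study.

```python
import numpy as np

def weighted_k(points, intensities, h_values, area):
    """inhomogeneous (weighted) k-function estimate without edge correction."""
    points = np.asarray(points, dtype=float)
    lam = np.asarray(intensities, dtype=float)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    w = 1.0 / np.outer(lam, lam)              # weight each pair by 1/(lambda_i * lambda_j)
    np.fill_diagonal(w, 0.0)                  # exclude i == j pairs
    return np.array([(w * (d <= h)).sum() for h in h_values]) / area

def centered_weighted_l(points, intensities, h_values, area):
    """variance-stabilized, centered version: sqrt(k / pi) - h."""
    k = weighted_k(points, intensities, h_values, area)
    return np.sqrt(k / np.pi) - np.asarray(h_values, dtype=float)
```

values of the centered weighted l - function significantly above zero indicate more clustering than the model predicts at that distance, and values below zero indicate inhibition, which is how the curves in figure [ allwk ] are read.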
this suggests that model a is underpredicting the degree of clustering in the observed seismicity and may be generally underpredicting the seismicity rate within highly active seismic areas , and may be overpredicting seismicity elsewhere .results are similar for model b and the etas model .the estimated l - function for model c shows significantly more clustering of the ( weighted ) seismicity than one would expect within distances of or km , that is , model c is significantly underpredicting the degree of clustering within this range , but seems consistent with the data outside of this range .the estimated l - function shows clear discrepancies between the step model and the data , as the ( weighted ) seismicity is significantly more clustered than one would expect according to the model at both small and large distances .these results are not surprising considering that step tends to underpredict seismicity overall : according to the step forecasts , one would expect only 63 earthquakes in total during the period in which 85 occurred .by contrast , etas tends to overpredict the overall rate , forecasting more than 114 earthquakes in this same period .as shown in section [ sec42 ] , when the spatial - temporal pixels are small , the distribution of raw and pearson residuals tend to be highly skewed , and this limits their utility .when pixels are larger , however , a drawback of pixel - based residuals is that considerable information is lost in aggregating over the pixels .instead , one may wish to examine the extent to which the data and model agree , without relying on such aggregation .one way to perform such an assessment is to transform the points of the process , by rescaling , thinning , superposition or superthinning , to form a new point process that should be a homogeneous poisson process if and only if the model used to govern this transformation is correct .the residual points can then be assessed for inhomogeneity as a means of evaluating the goodness of fit of the underlying model . observed that the temporal coordinates of a multivariate point process can be rescaled according to the integrated conditional intensity in order to form a sequence of stationary poisson processes . for a space time point process , one may thus rescale one axis , for example , the -axis , moving each observation to the new rescaled position , and assess the space time homogeneity of the resulting process .this sort of method was used by for model evaluation for the purely temporal case and by for the spatial - temporal case .the spatial homogeneity of these residual points may be assessed , for instance using ripley s k - function .if is spatially volatile , the transformed space bounding the rescaled residuals can be highly irregular , which makes it difficult to detect uniformity using the k - function . in this case, one can rescale the points along a different axis as in and see if there is any improvement . unfortunately, most csep forecast models have volatile conditional intensities , resulting in a highly irregular boundary regardless of which axis is chosen for rescaling . 
in such cases ,the k - function is dominated by boundary effects and has little power to detect excessive clustering or inhibition in the residuals .figure [ combresc ] shows the rescaled residuals for models b and c , which had the most well behaved of the rescaled residuals for the five models we considered .there is significant clustering in both the vertically and horizontally rescaled residuals for all five models , apparently due to clustering in the observations not adequately accounted for by the models , the most noticeable of which is the very large imperial cluster .one must be somewhat cautious , however , in interpreting rescaled residuals , because patterns observed in the points in the rescaled coordinates may be difficult to interpret .thinned residuals are a modification to the simulation techniques used by and , and , as shown in , are useful for assessing the spatial fit of a space time point process model and revealing locations where the model is fitting poorly . unlike rescaled residuals , thinned residuals have the advantage that the coordinates of the points are not transformed and , thus , the resulting residuals may be easier to interpret . to obtain thinned residuals ,each point is kept independently with probability where is the infimum of the estimated intensity over the entire observed space time window , .the remaining points , called _ thinned residual points _ , should be homogeneous poisson with rate if and only if the fitted model for is correct [ ] . for this method to have sufficient power ,several realizations of thinned residuals can be collected , each realization being tested for uniformity using the k - function , and then all k - functions may be examined together to get the best overall assessment of the model s fit . when applied to the csep earthquake forecasts, tends to be so small that thinning results in very few points ( often zero ) being retained .one can instead obtain _ approximate thinned residuals _ by forcing the thinning procedure to keep , on average , a certain number , , of points by keeping each point with probability as in . ) .top - center panel : model b ( ) .top - right panel : model c ( ) .bottom - left panel : etas ( ) .bottom - right panel : step ( ) . ] ) .top - center panel : model b ( ) .top - right panel : model c ( ) .bottom - left panel : etas ( ) .bottom - right panel : step ( ) . ] typical examples of approximate thinned residuals for the five models we consider , using and for models a , b , c , etas and step , respectively , are shown in figure [ thinplotsall ] .excessive clustering or inhibition in the residual process , compared with what would be expected from a homogeneous poisson process with overall rate , indicates lack of fit . 
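A minimal sketch of this thinning step is given below. The exact retention probability is elided in this copy of the text; here each point is kept with probability proportional to the reciprocal of its modelled intensity, scaled so that k points are retained on average, which is one natural reading of the approximate procedure.

```python
import numpy as np

def approximate_thinned_residuals(points, lam, k, rng=None):
    """Approximate thinned residuals: retain each point independently with a
    probability proportional to 1/lam_i, scaled so that the expected number
    of retained points is k.  (The proportional rule is an assumption; exact
    thinning would instead keep each point with probability b/lam_i, where b
    is the infimum of the fitted intensity over the space-time window.)
    """
    rng = np.random.default_rng() if rng is None else rng
    inv = 1.0 / np.asarray(lam, dtype=float)
    p = np.minimum(1.0, k * inv / inv.sum())
    return points[rng.random(len(points)) < p]
```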
to test the residuals for homogeneity, one may apply the weighted k - function to the residuals , with for all points .this is equivalent to using the unweighted version of the k - function on the residuals , except that here the overall rate is , whereas with the conventional unweighted k - function , the overall rate is typically estimated as .the estimated centered weighted l - functions for each model , along with the 95%-confidence bands based on [ wkbounds ] , are shown in figure [ thinplotsallwk ] .models a and step most noticeably fail to thin out the small cluster near the peterson mountain fault northwest of reno , nevada , and another small cluster in northern california that occurs approximately 35 kilometers south of the battle creek fault ( and ) .this residual clustering is significant , as shown by the weighted l - functions in figures [ thinplotsallwk](a ) and ( e ) .model b has trouble forecasting the imperial cluster , as evidenced by the significant clustering at distances up to 0.6 . the residuals for both modelsc and etas appear to be closer to uniformly distributed throughout the space , though further investigation of several realizations of thinned residuals reveals that model c has trouble thinning out the baja , california cluster , which leads to some significant clustering in the residuals at very small distances .superposition is a residual analysis technique similar to thinned residuals , but instead of removing points , one simulates new points to be added to the data and examines the result for uniformity .this procedure was proposed by , but examples of its use have been elusive .points are simulated at each location according to a cox process with intensity , where . as with thinning and rescaling, if the model for is correct , the union of the superimposed residuals and observed points will be homogeneous poisson .any patterns of inhomogeneity in the residuals aid us in identifying spots where the model fits poorly .superposition helps solve one of the biggest disadvantages of thinned residuals : the lack of information on the goodness of fit of the model in locations where no events occur . however ,if is large , then there is a possibility that too many points will be simulated , meaning that the behavior of the k - function will be primarily influenced by simulated points rather than actually observed data points . for models a and step , for example , simulated points comprise% of the total points after superposition . for models c and etas , simulated points comprise% of the superposed residual points .see figure [ shenxtsuper ] for an example of superposed residuals for model c. since the test for uniformity is based almost entirely on the simulated points , which are by construction approximately homogeneous for large , the test has low power for model evaluation in such situations . observed earthquakes ; plus signs points ) .right panel : estimated centered weighted l - function for superposed residuals ( solid line ) and 95%-confidence bounds ( dashed lines ) . 
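A sketch of the superposition step, under the simplifying assumption of a box-shaped space-time window, is given below; lam_fn and the window argument are assumptions of the sketch rather than a published interface.

```python
import numpy as np

def superposed_residuals(points, lam_fn, sup_lam, window, rng=None):
    """Superposition residuals: simulate points with intensity sup_lam - lam
    and take their union with the observed points.  If the model is correct,
    the union is homogeneous Poisson with rate sup_lam.

    window : ((x0, x1), (y0, y1), (t0, t1)) bounding box of the region.
    """
    rng = np.random.default_rng() if rng is None else rng
    (x0, x1), (y0, y1), (t0, t1) = window
    volume = (x1 - x0) * (y1 - y0) * (t1 - t0)
    n_cand = rng.poisson(sup_lam * volume)            # candidates at rate sup_lam
    cand = np.column_stack([rng.uniform(x0, x1, n_cand),
                            rng.uniform(y0, y1, n_cand),
                            rng.uniform(t0, t1, n_cand)])
    lam = np.array([lam_fn(*p) for p in cand])
    simulated = cand[rng.random(n_cand) < (sup_lam - lam) / sup_lam]
    return np.vstack([points, simulated]), simulated
```

When the supremum of the fitted intensity is large relative to the observed rate, almost all of the union consists of simulated points, which is precisely the loss of power discussed above.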
] a realization of superposed residuals for model b can be seen in figure [ kaganxtsuperwk ] , along with the corresponding centered weighted l - function as a test for homogeneity of the residuals .95%-confidence bands for the l - function are constructed under the null hypothesis for all points .the superposed residuals are significantly more clustered than would be expected , up to distances of 0.4 , or approximately 44.4 km .this is likely the result of the underprediction of the seismicity rate in the imperial cluster .one also observes significantly more inhibition in the superposed residuals than would be expected at distances greater than 0.5 , or approximately 55.5 km .this inhibition can most likely be attributed to the model s overprediction of the seismicity rate in areas devoid of earthquakes , which can be seen in the portions of figure [ kaganxtsuperwk](a ) in various regions lacking both simulated and observed points .a more powerful approach than thinning or superposition individually is a hybrid approach where one thins in areas of high intensity and superposes simulated points in areas of low intensity , resulting in a homogeneous point process if the model for used in the thinning and superposition is correct .the benefit of this method , called super - thinning by , is that the user may specify the overall rate of the resulting residual point process , , so that it contains neither too few or too many points . in super - thinning ,one first keeps each observed point in the catalog independently with probability and subsequently superposes points generated according to a simulated cox process with rate . the result is a homogeneous poisson process with rate if and only if the model for the conditional intensity is correct [ ] and , hence , the resulting super - thinned residuals can be assessed for homogeneity as a way of evaluating the model .in particular , any clustering or inhibition in the residual points indicates a lack of fit. observed earthquakes ; plus signs points ) .top - left panel : model a ( ) .top - center panel : model b ( ) .top - right panel : model c ( ) .bottom - left panel : etas ( ) .bottom - right panel : step ( ) . ] .top - left panel : model a ( ) .top - center panel : model b ( ) .top - right panel : model c ( ) .bottom - left panel : etas ( ) .bottom - right panel : step ( ) . 
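The thinning and superposition steps above combine directly into super-thinning; the sketch below again assumes a box-shaped window and a callable lam_fn for the fitted conditional intensity.

```python
import numpy as np

def super_thin(points, lam_points, lam_fn, c, window, rng=None):
    """Super-thinned residuals: keep each observed point with probability
    min(1, c/lam_i), then superpose simulated points with intensity
    max(0, c - lam).  The result is homogeneous Poisson with rate c if the
    fitted conditional intensity is correct.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam_points = np.asarray(lam_points, dtype=float)
    retained = points[rng.random(len(points)) < np.minimum(1.0, c / lam_points)]
    (x0, x1), (y0, y1), (t0, t1) = window
    volume = (x1 - x0) * (y1 - y0) * (t1 - t0)
    n_cand = rng.poisson(c * volume)
    cand = np.column_stack([rng.uniform(x0, x1, n_cand),
                            rng.uniform(y0, y1, n_cand),
                            rng.uniform(t0, t1, n_cand)])
    lam = np.array([lam_fn(*p) for p in cand])
    simulated = cand[rng.random(n_cand) < np.maximum(0.0, c - lam) / c]
    return np.vstack([retained, simulated])
```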
] in the application to earthquake forecasts , a natural choice for is the total number of expected earthquakes according to each forecast .figure [ allsuperthin ] shows one realization of super - thinned residuals for each model , and figure [ allsuperthinwk ] shows the estimated centered weighted l - functions for the corresponding residuals , with for all points , along with 95%-confidence bands .model a appears to fit rather well overall , with some significant clustering in the residuals at very small distances ( from 0 to 0.1 ) most likely attributable to the same small clusters that remained in the thinned residuals .however , the l - function in figure [ allsuperthinwk](a ) reveals that there is somewhat more inhibition in the residual process than we would expect .this is likely attributable to model a s overprediction of the seismicity rate especially in inter - fault zones .the super - thinned residuals for model b contain a few significant clusters ( imperial , laguna salada and panamint ) and some slight inhibition due to overprediction of seismicity in two regions devoid of any simulated points or retained earthquakes : the san diego - imperial county areas and the los angeles san bernardino areas .there is also significant clustering for model c up to distances of 0.2 , particularly the laguna salada , baja and panamint clusters .the etas residuals contain significant clustering at distances up to 0.1 , and this is largely attributable to the imperial cluster and to clusters in peterson mountain and the mt .konocti area near clearlake , california at and .the step residuals exhibit significant clustering at distances up to 0.4 , with obvious clustering at imperial , peterson mountain , battle creek , mt . konocti and the mendocino fault zone off the coast of northwest california .a litany of residual analysis methods for spatial point processes can be implemented to assess the fit and reveal weaknesses in point process models , and many of these methods provide more reliable estimates of the overall fit and more detailed information than the l - test and n - test .rescaled residuals can assist in the evaluation of the overall spatial fit , but are not easily interpretable due to the transformed spatial window .thinned residuals are much more easily interpretable , but suffer from variability in the thinned residual point pattern and low power if is too small .superposition is similar to thinning in that it also suffers from sampling variability and low power in the case of a very large supremum of .super - thinning appears to be a promising alternative , but , like superposition , may have low power if the modeled intensity is extremely volatile .deviance residuals and weighted second - order statistics appear to be quite powerful , especially for comparisons of competing models .clearly , the availability of a larger number of observed earthquakes in the tests would lead to more detailed and more meaningful results , and this suggests further decreasing the lower magnitude threshold .however , considerations of catalog incompleteness at lower magnitudes , as well as the fact that not all forecast models in the study are capable of forecasting small events and their spatial - temporal fluctuations , lead to limits on how low one may place the lower magnitude threshold for the catalog . indeed, lowering the threshold requires stronger time - dependence of the models to account for the short - term fluctuations of microseismicity . 
due to these considerations, csep sets the lower magnitude threshold in most cases to 3.95 for the time - varying models like step and etas .overall , model a seems to be overpredicting seismicity at the time of testing , but this may change once the forecast period is complete if there is a greater amount of seismic activity .models b and c appear to be significantly underpredicting seismicity in many locations , and unless the seismic activity in these regions slows down considerably , these models will continue to underpredict for the remainder of the forecast period .the spatial distribution of model a is quite accurate , coupling forecasts of high conditional intensity in areas along active faults with very low intensity forecasts in areas adjacent to these faults which typically are devoid of earthquakes .models b and c have smooth spatial distributions yielding erroneously high forecasts at distances far from any faults .the question of what choice of is optimal in thinning or super - thinning remains open for future research .ideally , should be chosen such that a poorly fitting model is rejected with high probability , while a `` correct '' or satisfactorily fitting model is rejected with low probability ( i.e. , the type i error probability , , is small ) .when thinning , we lose information when points are removed , so we prefer to keep as many points as possible , while keeping low . with super - thinning , we would also ideally want to retain many of the original points while simulating few points , so that any assessment of the homogeneity of the residuals is not highly dependent on the simulations .simulation and theoretical studies are needed in the future to compare the power of these goodness - of - fit measures under various hypotheses .we thank yan kagan and alejandro veen for helpful comments , the advanced national seismic system for the earthquake catalog data , and the collaboratory for the study of earthquake predictability and the southern california earthquake center for supplying the earthquake forecasts .
modern , powerful techniques for the residual analysis of spatial - temporal point process models are reviewed and compared . these methods are applied to california earthquake forecast models used in the collaboratory for the study of earthquake predictability ( csep ) . assessments of these earthquake forecasting models have previously been performed using simple , low - power means such as the l - test and n - test . we instead propose residual methods based on rescaling , thinning , superposition , weighted and deviance residuals . rescaled residuals can be useful for assessing the overall fit of a model , but as with thinning and superposition , rescaling is generally impractical when the conditional intensity is volatile . while residual thinning and superposition may be useful for identifying spatial locations where a model fits poorly , these methods have limited power when the modeled conditional intensity assumes extremely low or high values somewhere in the observation region , and this is commonly the case for earthquake forecasting models . a recently proposed hybrid method of thinning and superposition , called super - thinning , is a more powerful alternative . the weighted k - function is powerful for evaluating the degree of clustering or inhibition in a model . competing models are also compared using pixel - based approaches , such as pearson residuals and deviance residuals . the different residual analysis techniques are demonstrated using the csep models and are used to highlight certain deficiencies in the models , such as the overprediction of seismicity in inter - fault zones for the model proposed by helmstetter , kagan and jackson [ _ seismological research letters _ * 78 * ( 2007 ) 7886 ] , the underprediction of the model proposed by kagan , jackson and rong [ _ seismological research letters _ * 78 * ( 2007 ) 9498 ] in forecasting seismicity around the imperial , laguna salada , and panamint clusters , and the underprediction of the model proposed by shen , jackson and kagan [ _ seismological research letters _ * 78 * ( 2007 ) 116120 ] in forecasting seismicity around the laguna salada , baja , and panamint clusters . , + and .
one of the basic principle of fluid mechanics is the so - called `` reynolds similarity principle '' : no matter their composition , size , nature , different flow obeying the same equations with the same control parameters will follow the same dynamics .this principle has been used a lot in engineering to built e.g. prototypes of bridges to be tested in wind tunnels before construction . to obtain easy - to - use prototypes with realistic control parameters ,one then decreases the size but increases the velocity of the in - flowing wind so as to keep constant the reynolds number , controlling the dynamics of the flow .this principle could also be of great interest for certain astrophysical flows , whose dynamics could well be approached by simple laboratory flows .a good example is circum - stellar disk . in , it has been shown that under simple , but founded approximations , their equation of motions were similar to the equation of motion of an incompressible rotating shear flow , with penetrable boundary conditions and cylindrical geometry .this kind of flow can be achieved in the couette - taylor flow , a fluid layer sheared between two coaxial cylinders rotating at different speed , while penetrable boundary conditions can be obtained using porous material . on more general grounds ,the taylor - couette device is also an excellent prototype to study transport properties of most astrophysical or geophysical rotating shear flows : depending on the rotation speed of each cylinder , one can obtain various flow regimes with increasing or decreasing angular velocity and/or angular momentum .the taylor - couette flow is a classical example of simple system with complex and rich stability properties , as well as prototype of anisotropic , inhomogeneous turbulence .it has therefore motivated a great amount of laboratory experiments , and is even the topic of a major international conference .tagg ( see http://carbon.cudenver.edu/rtagg ) has conducted a bibliography on taylor - couette flow , which gives a good idea of the prototype status of this flow . here , we make use of the many results obtained so far for the taylor - couette experiment , regarding transition to turbulence , or turbulence properties to propose a practical prescription for the turbulent viscosity as a function of the radial position and the control parameters .it reads where are the control parameters ( section [ contro ] ) , is the torque measured when only the inner cylinder is rotating ( section [ torque - out ] ) , is a universal function provided in section [ torque - out ] , and is the ratio of the laminar to the mean shear , which encodes all the radial dependence as illustrated in section [ torque - out ] . and are typical shear and radius of the considered flow .most of the results we use here have been published elsewhere , except recent experimental results obtained by richard .our work therefore completes and generalizes the approach pioneered by zeldovich , with subsequent contributions by , in which usually only one aspect of the experiments has been considered .an application of these findings to circumstellar disks using the reynolds similarity principle can be found in hersant et al . thereby providing a physical explanation of several observable indicators of turbulent transport .the taylor - couette flow is obtained in the gap between two coaxial rotating cylinders of radii , rotating at independent velocities . 
for the purpose of generality and to allow further comparison with astrophysical flows , the velocity field at the inner cylinder boundary may have a non - zero radial component .the hydrodynamic equation of motions for an incompressible flow are given by : _t * u*+*u*&=&-p+ , + & = & 0 . [ equans ] where and are respectively the fluid density and kinematic viscosity , is the velocity , and is the pressure . equation ( [ equans ] ) admits a simple basic stationary solution , with axial and translation symmetry along the cylinders rotation axis ( the velocity only depends on ) .it is given by a flow with zero vertical velocity , and radial and azimuthal velocity given by : u_r&= & , + u_&=&a r^1++ , [ solutheta ] where and are constants and .this basic laminar state depends on three constants , and , which can be related to the rotation velocities at the inner and outer boundaries : a&=&(_o-^2_i ) , + b&=&(_i-_o^ ) , [ clgeneral ] where and is the radial reynolds number , based on the radial velocity through the wall of the inner cylinder . ; : ; ; : ; : .the upper panel is with ; the lower panel is with .the radius ratio has been arbitrarily fixed at ,width=264 ] the radial circulation is quantified by the value of .it is positive for outward motions . for impermeable cylinders , and one has the `` classical '' taylor - couette flows . for a porous internal cylinder ,one obtains a taylor - couette flow with radial circulation .the strength of the radial circulation can be controlled by using more or less porous cylinders ) .[ fig : profile - circ.eps ] provides an example of the influence of the radial circulation on the azimuthal profile . in practice , even for impermeable cylinders , the flow is not purely azimuthal . because of the finite vertical extent of the apparatus , a large - scale ekman circulation is established through the effect of the top and bottom boundaries .this circulation depends on the ratio of radii and velocities , and on the top and bottom boundary conditions .its signature is easy to detect by profile monitoring , or by measuring the difference between the torque at the inner and outer cylinder .of course this circulation is both radial and vertical and it varies along the cylinders axis .also its intensity is not easy to control , since it is not fixed externally , but results from a non - trivial equilibrium within the flow .still , at a given axial position , one may estimate this intensity by a fit of laminar profile using ( [ solutheta ] ) and ( [ clgeneral ] ) . to simplify the exploration of the parameter space, we shall restrict ourselves to the case of , and study separately the influence of this parameter . in the laboratory , minimizing circulation effect is achieved by working with tall cylinders and consider only a fraction of the flow located at a distance to the top of about 1/3 of the total height , where the radial velocity is expected to be the weakest . specific influence of on stability and transport properties will be considered in section [ stab - rad ] and [ torque - rad ] .dimensional considerations show that there are only four independent non - dimensional numbers to characterize the system , which can be chosen in various ways .the traditional choice is to consider as the unit length , and as the unit time . 
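Before moving to the dimensionless formulation, it may help to note that the laminar solution written above is easy to evaluate numerically. The sketch below determines the constants A and B from the no-slip conditions at the two cylinders rather than from the closed-form expressions, which avoids reproducing formulas that are garbled in this copy; alpha denotes the radial Reynolds number of the through-flow.

```python
import numpy as np

def laminar_u_phi(r, r_i, r_o, omega_i, omega_o, alpha=0.0):
    """Laminar azimuthal velocity u_phi(r) = A r**(1 + alpha) + B / r for
    Taylor-Couette flow with radial circulation (alpha = 0 recovers the
    classical Couette profile).  A and B are obtained by solving the two
    boundary conditions u_phi(r_i) = omega_i r_i and u_phi(r_o) = omega_o r_o.
    """
    m = np.array([[r_i ** (1.0 + alpha), 1.0 / r_i],
                  [r_o ** (1.0 + alpha), 1.0 / r_o]])
    a, b = np.linalg.solve(m, [omega_i * r_i, omega_o * r_o])
    return a * r ** (1.0 + alpha) + b / r

# example: classical (non-porous) case with the outer cylinder at rest
r = np.linspace(1.0, 2.0, 5)
print(laminar_u_phi(r, r_i=1.0, r_o=2.0, omega_i=1.0, omega_o=0.0))
```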
with this choice ,the dimensionless equations of motions are : _ t * u^**+*u^ * * & = & -p^*+ , + = 0 , [ equansadim1 ] with boundary conditions : ^*(r_i)&=&((1-)/,r_i,0 ) + * u*^*(r_o)&=&((1-),r_o,0 ) [ cladim1 ] where : & = & , + & = & , + r_i&= & , + r_o&=&. [ param1 ] in the following , we will omit the star superscript indicating non - dimensional quantity .the present choice of unit amounts to define the control parameters by non - dimensional boundary conditions . when comparing flows that do not share identical geometry , it is of interest to identify control parameters characterizing the dynamical properties of the flows .in the case of rotating shear flows , it is convenient to write the equations in an arbitrary rotating frame with angular velocity , choose as unit length , the inverse of a typical shear as unit time and as a typical radius .furthermore , it is useful to introduce the `` advection shear term '' proposed in { \bm e}_\phi + { \bm w}.{\bm\nabla } ( w_z { \bm e}_z).\ ] ] so that the contribution of the mean flow derivative to the modified advection term vanishes when the flow is not sheared , for azimuthal axisymmetric flow . as a result , one has _t * w*+*w**w*&=&-- r_*e_z * + & & + r_c(_r - _ ) + & & + re^-1 , + & = & 0 , [ equansadim2 ] with boundary conditions : ( r_i , o)&=&*u*(r_i , o)- [ cladim2 ] where : re & = & + r_&= & + r_c & = & d / r [ param2 ] are the dynamical control parameters for a given radial circulation . is an azimuthal reynolds number , measuring the influence of shear . is a rotation number , measuring the influence of rotation .note that now also includes the centrifugal force term . in this general formulation ,one is free to choose .it is convenient to choose as a typical rate of rotation so that one can easily compare the taylor - couette case to the case of a plane shear in a rotating frame .for instance one can choose so that in order to restore the symmetry between the two walls boundary conditions .this choice of amounts to fix by . for consistency , it is then convenient to choose . in this context and with , it is easy to relate the above control parameters to the traditional choice : & = & , + re&=&= r_o - r_i , + r_&=&= ( 1- ) , + r_c&=&. [ controltc ] the above control parameters have been introduced so that their definition apply to rotating shear flows in general and not only to the taylor - couette geometry .it is very easy in this formulation to relate the taylor - couette flow to the plane couette flow with rotation , by simply considering the limit .also , in the astrophysical context , one often considers asymptotic angular velocity profiles of the form where then fully characterizes the flow . in that case , which is a simple relation to situate astrophysical profiles in the control parameters space of the taylor - couette flows . from the hydrodynamic viewpoint ,an important characteristic of the flow profile is the sign of the shear compared to the sign of the angular velocity , which defines cyclonic and anticylonic flows .for the co - rotating laminar taylor - couette flow , the sign of the local ratio is constant across the whole flow and is thus simply given by the sign of the rotation number ( for cyclonic flows and for anticylonic flows ) . 
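As a small worked illustration of these definitions, the conversion from the traditional parameters (radii, rotation rates, viscosity) to (Re, R_Omega, R_C) can be written as follows. The exact expressions are partly garbled in this copy, so the conventions used below (a shear Reynolds number built on the gap, a rotation number normalized by the shear, and a curvature number d divided by the mean radius) should be read as assumptions consistent with this framework rather than as quotations of the original formulas.

```python
import numpy as np

def control_parameters(r_i, r_o, omega_i, omega_o, nu):
    """Convert (r_i, r_o, Omega_i, Omega_o, nu) into (Re, R_Omega, R_C).

    Assumed conventions: Re_i = r_i Omega_i d / nu, Re_o = r_o Omega_o d / nu,
    Re      = 2 |eta Re_o - Re_i| / (1 + eta),
    R_Omega = (1 - eta)(Re_i + Re_o) / (eta Re_o - Re_i),
    R_C     = d / sqrt(r_i r_o) = (1 - eta) / sqrt(eta).
    """
    eta = r_i / r_o
    d = r_o - r_i
    re_i = r_i * omega_i * d / nu
    re_o = r_o * omega_o * d / nu
    re = 2.0 * abs(eta * re_o - re_i) / (1.0 + eta)
    r_omega = (1.0 - eta) * (re_i + re_o) / (eta * re_o - re_i)
    r_c = (1.0 - eta) / np.sqrt(eta)
    return re, r_omega, r_c

# example: outer cylinder at rest gives R_Omega = -(1 - eta), an anticyclonic flow
print(control_parameters(r_i=0.9, r_o=1.0, omega_i=1.0, omega_o=0.0, nu=1e-4))
```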
finally let us recall that an analogy exist between taylor - couette and rayleigh - bnard convection ( see for details and for a review ) , which calls for an even larger generalization of the control parameters definition .figure [ fig : param ] displays the characteristic values taken by the new parameters in the usual parameter space for co - rotating cylinders .it also helps to situate cyclonic and anti - cyclonic flows , as well as prototypes of astrophysical flows .as usual when considering stability properties , one must distinguish stability against infinitesimal disturbances linear stability from that against finite amplitude ones non - linear stability . when the basic flow is unstable against finite amplitude disturbance , but linearly stable , it is called subcritical , by contrast with the supercritical case for which the first possible destabilization is linear ( see for further details ) . in the inviscid limit ( ) , and for axisymmetric disturbances ,the linear stability properties of the flow are governed by the rayleigh criterion .the fluid is stable if the rayleigh discriminant is everywhere positive : where is the specific angular momentum . applying this criterion to the laminar profile leads to since varies between and , one obtains that in the inviscid limit , the flow is unstable against infinitesimal axisymmetric disturbances when , where , respectively , are the marginal stability thresholds in the inviscid limit ( superscript ) in the cyclonic case ( , subscript ) , respectively anticyclonic case ( , subscript ) .these rayleigh limits are also displayed on figure [ fig : param ] , where they have to be seen as asymptotic . as a matter of fact, this information is rather poor : * non - axisymmetric disturbances can be more destabilizing than axisymetric ones , so that the flow could be linearly unstable in part of the linearly stable domain ; * viscous damping will probably reduce the linearly unstable domain ; * finally , finite amplitude disturbances may seriously reduce the stable domain . in the following , assuming that the axisymmetric disturbances are indeed the most dangerous one at the linear level whichup to now is validated both experimentally and numerically , we will consider the two last items .on one side , we will review the existing results on the effect of viscosity in the supercritical case , which will provide us with a critical reynolds number as a function of the other parameters . on the other side, we will investigate the subcritical stability limit , when the flow is linearly stable and try to figure out what is the behavior of the minimal reynolds number for self - sustained turbulence . ) .flows with positive gradient of angular momentum but negative ( resp .positive ) gradient of angular velocity are referred to as keplerian ( resp .stellar ) .the shaded area corresponds to rayleigh unstable flows ( supercritical case).,width=302,height=264 ] these boundaries can be estimated via different tools , depending of the type of experiment and available measurements . in numerical experiments , the simplest way to estimate the stability boundary in the linear case is through a modal decomposition and a monitoring of real part of the largest eigenvalue . 
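In the inviscid limit, the linear check described above reduces to the sign of the Rayleigh discriminant, which can be evaluated directly from any angular velocity profile, as in the following sketch.

```python
import numpy as np

def rayleigh_discriminant(r, omega):
    """Rayleigh discriminant Phi(r) = (1/r**3) d(r**2 Omega)**2 / dr,
    evaluated by finite differences.  The profile is stable to inviscid
    axisymmetric disturbances where Phi >= 0 everywhere.
    """
    ell = r ** 2 * omega                      # specific angular momentum
    return np.gradient(ell ** 2, r) / r ** 3

# example: a Keplerian-like profile Omega ~ r**(-3/2) has outward-increasing
# angular momentum, hence Phi > 0 (Rayleigh-stable despite the negative shear)
r = np.linspace(1.0, 2.0, 200)
print(rayleigh_discriminant(r, r ** -1.5).min() > 0.0)
```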
in laboratory experiments , at least three different tools have been used : i ) torque measurements ; ii ) flow visualization ; iii ) mean velocity profile measurements .torque measurements have been traditionally used in the past .their advantage is their accuracy and their flexibility to detect other transition at larger reynolds numbers .their inconvenience is their difficulty of implementation in the case where both cylinders are rotating .flow visualizations allow discriminating between laminar and turbulent flows but suffer from the lack of quantitative information on the flow .mean velocity profile measurement is a third alternative , which allows determination of critical reynolds number from deviation of velocity profiles with respect to laminar value , or changes of regime .this technique is more local in nature , and requires advanced techniques of in - flow measurements . in the sequel , we shall use data from several sources , described in table [ tab_source ] .except for the data of richard , all of them have been published .those by richard are available in his thesis manuscript .we take the opportunity of this synthesis to integrate them in a larger perspective ..experimental data and sources [ cols="<,<,<",options="header " , ] numerous experimental set ups were used to study the stability boundary in the linear case , starting from the early experiments of couette , taylor , and donnelly and fultz .the viscosity damps the instability until , corresponding to the transition from the laminar flow to the so - called taylor vortices flow .figure [ fig : rc ] displays the numerical data by snyder , providing the stability threshold as a function of for three gap size ( ) , and illustrates the influence of the curvature on the instability threshold . the experimental data of prigent et al . , at is also reported .: ; : , . experimental data by prigent et al . : .continuous line : lezius and johnston plane couette or small gap limit stability criteria .dashed line : esser and grossman prediction for and .,width=302 ] as ( rotating plane couette limit ) , the stability curve becomes symmetric around and diverges at or .this is in agreement with the linear stability criterion for the rotating plane couette flow , a generalization of the first exact result giving the linear stability of the non - rotating plane couette flow for all reynolds number .the observed symmetry actually reflects symmetry in the rotating plane couette .the linearized equations of motions are invariant by the transformation exchanging streamwise and normal to the walls coordinates and velocities ( corresponding to exchanging with and and , in taylor - couette ). this transformation changes into , hence the symmetry around .when becomes smaller than , curvature enters into play and breaks the symmetry resulting in less and less symmetrical curves , as can be observed for .the above stability boundary can be recovered numerically by classical stability analysis , using e.g. 
normal mode analysis with numerical solutions .interestingly a very good approximate analytical formula in the whole parameter space has recently been derived by esser and grossmann .it is : r_c^2(r_+1)(r_+1-)= & & + -1708()^4 , & & [ getransforme ] with & & x()=1+(a ( ) ) , & & = ( -1 ) , + & & a()=(1-)(-)^-1 , [ getransforme2 ] where is a function equal to if and equal to if .continuous lines on figure [ fig : rc ] give a good insight on the validity of the above formula .let us underline that in the absence of the above formula , small and wide gap approximations were often used .whereas the small gap approximation works rather well until , the large gap one where is the heavyside function , remains a very poor approximation even for .note that the formula ( [ getransforme ] ) defines two critical rotation number for which the critical reynolds number diverges : and such that .this number has been computed for various and is shown on figure [ fig : romc ] .one sees that it is very well approximated by the formula .this remark is used in the next section . in this supercritical situation, the flow undergoes several other bifurcations following the first linear instability and turns into more and more complex patterns , eventually leading to turbulence .interestingly , at much larger reynolds number , an additional transition have been reported .one indeed observes a change in the torque dependence on the reynolds number , which could be associated with a featureless turbulence regime .sometime called `` hard turbulence '' , this regime is observed for . for reasons that will become clearer, we defer its discussion after study of the torque . as a function of the gap size . : computed from the analytic formula of esser and grossman .plain line : .,width=283,height=264 ] in the absence of general theory for globally subcritical transition to turbulence , the non - linear stability boundary has only been explored experimentally .wendt and taylor consider the case with inner cylinder at rest , corresponding to , at various gap size , using torque measurements . a more recent experiment by richard explores the domain and , at fixed gap size , using flow visualizations .finally one also has the measurement conducted in a rotating plane couette flow ( ) by tillmark and alfredsson for .the corresponding results are reported on figure [ fig : nonlin ] , giving as a function of for different value of .: taylor data with inner cylinder at rest and ; : richard data with ; : tilmark data for rotating plane couette flow ; dotted line : linear fit of tilmark s data ; plain line : linear fit of richard s data .anticyclonic flow : : richard data with ; plain line : linear fit of richard s data.,width=283,height=264 ] one must be very cautious when looking at this naive representation of the data , especially on the cyclonic side .first , the data are presented for different values of . especially for the data of taylor ( ) ,each point is a different .the fact that the data look aligned through all values of is an artifact of the representation as illustrated by the extrapolation of the linear fit of tillmark s data .second , looking at this figure seems to play a similar role in the cyclonic regime than in the anticyclonic case . as we have seen above , when studying the linear stability , this is true for only . 
as discussed in previous section ,the correct value of the marginal stability is approximately equal to the inviscid limit for the cyclonic case .taylor s data are actually given at this precise value of , because taylor performed his experiments with the internal cylinder at rest .this condition imposes , which _ coincides _ with the marginal stability limit . in the following, we shall try to extract from this data the maximal knowledge about the dependence of on and , both in the cyclonic and anticyclonic case .all the data about the manifold are obtained close to its intersection with the manifold .therefore one first has to estimate the locus of the intersection between these two manifolds that is , then the variation of with , close to the manifold , at the intersection .let us first consider the cyclonic case .one can take benefit of taylor s and wendt s data to estimate as proposed by richard and zahn .the fact that the data are read from the original figure of taylor and wendt however induces a natural error bar in the determination of the critical reynolds number , as illustrated on figure [ fig : f_eta ] , where several estimates , obtained by different authors , are reported . . : wendt s data ; : taylor s data ; : richard s data ; plain and dotted line : fit of ( see text for details ) .the size of the symbol denotes different estimate by richard and zahn ( small ) , zeldovich ( medium ) and present authors ( large ) based on published figures of taylor and wendt.,width=283 ] because of this error , it is difficult to give a precise fit of the function .one sees that the quadratic regime in given by and proposed by richard and zahn provides a good upper estimate of the function . a linear trend in , with slope gives a good lower estimate of the data for , as shown on figure [ fig : f_eta ] .clearly , more precise estimate of this function using modern data will be welcome .note that at , the function tends to a constant that is nothing but , the global stability threshold measured independently by tillmark and alfredsson and dauchot and daviaud in the non - rotating plane couette flow .the second step is to propose a linear development in , close to the above estimate : for one recovers the linear fit proposed by tillmark and alfredson ( plotted and extrapolated on fig [ fig : nonlin ] ) for the rotating plane couette flow : that is . for , the linear fit of richard s data ( plotted and extrapolated on fig [ fig : nonlin ] ) leads to . in the anticyclonic case ,the situation is simpler because , does not depend on . on the other hand , data are available for a unique value , so that one can not estimate .the only fit that can be performed in this state of experimental knowledge is : one finds and and the fit is displayed on figure [ fig : nonlin ] . 
in the anticyclonic regime , at least for this value of ,one recovers a dependence on the rotation similar to that of the plane couette flow .also remarkable is the fact that is so close to in the non - rotating case .altogether the data collected to date suggest that , in the linearly stable regime , the reynolds number of transition to subcritical turbulence be well represented by with , and .it is difficult to distinguish the effect of experimental procedures from the effects of gap width dependence in the present parameter range .the influence of radial circulation on the linear stability onset has been studied numerically by min and lueptow .they observed that an inward radial flow and strong outward flow have a stabilizing effect , while a weak outward flow has a destabilizing effect .we may use their data to get more precise estimates for the case , ( , keplerian case ) . figure [fig : rad ] shows the ratio as a function of for .one sees that the variation is quasi - linear . as a function of ( a ) for ; ( b ) for . : data from min and lueptow .the dotted lines are the fit eq .( [ fitml1 ] ) and ( [ fitml2]).,width=283 ] a best fit gives : on the same graph , we show as a function of for .a best fit gives : the influence of the radial circulation on the non - linear stability has not been systematically studied .however , we can get partial answers from the experiments of wendt and richard , where the influence of the top and bottom circulation on the onset of stability has been studied .both richard and wendt investigated the stability boundary with different boundary conditions .one boundary condition was with the bottom attached to the outer cylinder . in this case , the circulation is mainly in the anti - clockwise direction , with radial velocities outwards at the bottom ( ) .another boundary condition was with the bottom attached to the inner cylinder ( at rest ) . in that case , the circulation is in the opposite direction , with inward radial velocities at the bottom ( ) .a last boundary condition was intermediate between the two , with only part of the bottom attached to the outer cylinder . in neithercase , noticeable change of the stability boundary has been noticed , which means that at this aspect ratio , the radial circulation induced by the boundary conditions has an impact on the subcritical threshold reynolds number which is less than 10 per cent ( accuracy of the measurements ) .most of the experimental set - ups described in this paper have a very large aspect ratio .keplerian disks are characterized by a small aspect ratio .it would be interesting to conduct systematic studies of the variation of onto the stability and transport properties .the influence of onto the instability threshold , in the case of outer cylinder at rest has been computed by chandrasekhar , snyder .this is illustrated in fig .[ fig : aspect ] .the critical reynolds number is increased , as is decreased .it follows an approximate law : this behavior can be understood if one says that as becomes smaller , the smallest relevant length scale in the problem become instead of .the relevant reynolds number has thus to be corrected by a factor , hence , the law .however , another experimental study by park et al . 
suggests that the physical relevant length scale is instead of .a possible explanation of the difference is through the ekman circulation , which is present in experiments and not in numeric .this circulation may couple vertical and radial velocities , leading to an effective length scale .the only way to settle this issue is through smaller aspect ratio systematic laboratory and numerical experiments . :numerical data by chandrasekhar .the dotted line is a power law fit .,width=283 ] to close this section , it is interesting to consider the influence of additional physical forces that may be relevant to astrophysical flows . in the sequel , we only give a summary of the main experimental or theoretical results obtained , referring to the publications for more details .the influence of a vertical magnetic field on the stability of a taylor - couette flow has been studied theoretically and experimentally by donnelly and ozima using mercury .applications to astrophysics have been discussed by balbus and hawley .this motivated a lot of numerical work on this instability . for references , see e.g. . in the inviscid limit, the presence of a magnetic field changes the rayleigh criteria ( [ rayleigh ] ) .for example , in the case of a magnetic field given by , the sufficient condition for stability is now : therefore , anti - cyclonic flow , with are now potentially linearly _unstable _ in the presence of a magnetic field with no azimuthal and radial component .the linear instability in the presence of dissipation has only been studied numerically .a first observation was that boundary conditions ( e.g. insulating or conducting walls ) are relevant to determining the asymptotic behaviors .the proposed explanation is that the magnetic field makes the flow adjoin the walls for longer distances , so that the viscous dissipation remains comparable to the joule dissipation at all fields .a second observation is the importance of the magnetic prandtl number ( is the magnetic diffusivity ) on the instability . on general grounds, it seems that at small prandtl numbers , the magnetic field _stabilizes _ the flow in the supercritical case , while at large prandtl numbers , the magnetic field _ destabilizes _ the flow . in the subcritical case ,the magnetic field can excite a linear instability for anti - cyclonic flow , at any prandtl number .this is illustrated in fig .[ fig : lin - mag ] .scaling of critical reynolds number with magnetic prandtl numbers have been found : in the supercritical case , the critical reynolds number scales like . in the subcritical case ,the critical reynolds number scales like . : without forces , , numerical data from snyder ( 1968 ) . with vertical constant magnetic field , at ( ) and ( ) , ; numerical data from rdiger et al , 2003 ; : with vertical stratification , ; data from whithjack and chen ( 1974).,width=283 ] a vertical stable stratification added onto the flow plays the same role as a vertical magnetic field at low . 
in the inviscid limit, its presence changes the rayleigh criteria into ) .this means that all anti - cyclonic flows are potentially linearly unstable .the role of dissipation on the instability has been studied numerically ) and experimentally .it was found that stratification stabilizes the flow in the gspc regime , while it destabilizes it in the gsbc anti - cyclonic regime .the critical reynolds number was found to scale with the froude number ( ratio of rotation frequency to brunt vaissala frequency ) like , and to scale with the prandtl number ( ratio of viscosity to heat diffusivity ) like .a radial temperature gradient applied to the flow changes the stability . in the inviscid limit, the rayleigh criterion is modified by the radial temperature gradient into : where is the coefficient of thermal expansion and is the temperature difference between the cylinders . the last term in ( [ rayleighradial ] )induces an asymmetry between the case with positive and negative .an experimental study by snyder and karlsson helps to quantifying the role of dissipative processes .it was found that both positive and negative have a stabilizing effect when is small , and a destabilizing effect when is large .a more complete exploration of the parameter space would be welcome , since astrophysical disks are likely to be subject to this kind of stratification .these studies point out an interesting dissymmetry between the case ( cyclonic flows ) and ( anti - cyclonic flows ) . in many instances ,the regime of linear instability is _ extended _ by the large scale force into the whole domain . as a result ,in the anticyclonic regime one often has to deal with a competition between a linear destabilization mechanism induced by the large scale effect and the subcritical transition controlled by the self - sustained mechanism of the turbulent state .turbulent mean profiles have been measured recently for different reynolds number by lewis and swinney in the case with outer cylinder at rest .they observe that the mean angular momentum is approximately constant within the core of the flow : for reynolds numbers between and . at low reynolds number, this feature can be explained by noting that reducing the angular momentum is a way to damp the linear instability , and , thus , to saturate turbulence . at larger reynolds number, however , one expects the turbulence to be sustained by the shear in the same way as it is when there is no linear instability at all .accordingly , this constancy of the angular momentum is quite a puzzling fact .some understanding of this behavior can be obtained by observing that the mean profiles obtained by lewis and swinney are actually in good agreement with a profile obtained by busse upon maximizing turbulent transport in the limit of high reynolds number : this profile bears some analogy with the laminar profile , which reads : in the busse solution , the shear profile .this ratio is analog to the value observed at very large reynolds number in the non - rotating plane couette flow .it is therefore a clear signature of the shear instability , with no discernable influence of rotation , at least for the limited value of the rotation number ( of the order of ) considered by lewis and swinney .so it is interesting to test the busse asymptotic profile using other data , with different rotation number .this will be the purpose of the next section , where richard data will be used .we may however not conclude this section without noting an intriguing property of the busse solution . 
considering , we get from ( [ busse ] ) : so the condition ( `` linear stability of the turbulent profile '' ) is satisfied provided follows : that is , in the small gap limit , .as we shall see in the sequel , this is precisely the range of value where the torque is extremum . for the turbulent flow following the subcritical transition, we use the data of richard , collected for different reynolds numbers and rotation numbers .figure [ fig : profil - v ] displays typical turbulent mean profiles in both the cyclonic and anticyclonic cases , for comparison with the laminar and the busse profiles . : ( a ) : cyclonic case ( ) ; ( b ) : anti - cyclonic case ( ) .dotted line : laminar profile .continuous line : busse solution eq.([busse]).,width=283 ] one notices the profile tendency to evolve from the laminar one to the busse solution , even if they are still very far away from the extremizing solution . in order to evaluatehow fast the convergence occurs , figure [ fig : shear - redu ] displays the ratio of the turbulent mean shear to the laminar shear , both estimated at , i.e. , as a function of the ratio of the reynolds number to the threshold for shear sustained turbulence i.e. .one may indeed observe a tendency of shear reduction as the reynolds number increases , with a more rapid reduction for rotation number closer to .however , none of the case studied by richard approaches the value predicted by busse .it would be interesting to conduct higher reynolds number experiments at large value of the rotation number , to check whether rotation merely slow down the convergence towards the value , or change it into a number depending on the rotation number .: ; : ].,width=283,height=207 ] also one may notice that the decrease of with is much faster for cyclonic flows than for anticyclonic ones .figure [ fig : mean - prof ] may provide some hints on the origin of this dissymmetry .the first one is obtained by studying the radial variation of the ratio at a given , for different rotation number .this quantity provides the radial variation of the turbulent viscosity and thus is a good tracer of transport properties .one may observe an interesting tendency for cyclonic flow to display enhanced ( resp .depleted ) transport at the inner ( resp .outer ) core boundary , while anti - cyclonic flow rather displays depleted transport at the center , and enhanced transport at both boundaries .the second one is provided by the function : which may be viewed either as a local mean angular velocity exponent , or a local mean rotation number .this local exponent also plotted on figure [ fig : mean - prof ] , for different rotation number , at .one clearly observes a tendency towards constancy of this local exponent in the core of the flow and a bimodal behavior : cyclonic flow scatters towards while anti - cyclonic flow scatters towards .we have observed a persistence of this behavior at larger reynolds number ( up to at least ) . for ( a ) turbulent transport , traced by the ratio ;( b ) local rotation number . : ; : ; : ; : .data are from richard.,width=283 ]the turbulent transport can be estimated via the torque applied by the fluid to the rotating cylinders .traditionally , one works with the non - dimensional torque . for laminar flows, one can compute this torque analytically using the laminar velocity profile .it varies linearly with the reynolds number . 
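The local exponent used in this discussion is simple to extract from a measured mean profile; the sketch below computes q(r) = d ln Omega / d ln r from an azimuthal velocity profile, with q = 0 for solid-body rotation, q = -3/2 for a Keplerian-like region and q = -2 for constant angular momentum.

```python
import numpy as np

def local_exponent(r, u_phi):
    """Local exponent q(r) = d ln Omega / d ln r from a mean azimuthal
    velocity profile u_phi(r), with Omega = u_phi / r."""
    omega = u_phi / r
    return np.gradient(np.log(np.abs(omega)), np.log(r))

# example: the laminar profile A r + B / r gives q between 0 and -2
r = np.linspace(1.0, 2.0, 200)
print(local_exponent(r, 1.0 * r + 0.5 / r)[::50])
```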
when the turbulence sets in , the torque applied to the cylinders tends to increase with respect to the laminar case .a good indicator of the turbulent transport can then be obtained by measuring .as noticed by richard and zahn , most of the torque measurements available in the literature concern the case with the outer cylinder at rest ( see e.g. and references therein ) . in that case , we note that .an example of the variation of with reynolds number is given in figure [ fig : torque ] , in an apparatus with .one observes three types of behaviors : below a reynolds number , i.e. in the laminar regime . above , one observes a first regime in which varies approximately like a power - law , with exponent . in this regime ,taylor vortices can often be noticed .this regime continues until , where the torque becomes stronger , and the power - law steepens into something with exponent closer to .this regime has been observed up to the highest reynolds number achieved in the experiment ( of the order of ) .the experiment with inner cylinder rotating only covers flows such that . to check whether this kind of measurement is typical of torque behaviors in the globally supercritical case, one must rely on experiments in which the outer cylinder is also in rotation .unfortunately , the only torque measurements available in this case are quite older and not as detailed as in the case with inner cylinder rotating .most specifically , they do not extend all the way down to the transition region between laminar and turbulent .in several instances in which large reynolds number are achieved , however , one may observe a steepening of the relative torque towards the already observed in the case with inner cylinder rotating . on other measurements performed at lower reynolds numbers, the relative torque displays a behavior more closely related to the intermediate regime , with .altogether , this is an indication that in the globally supercritical case , the torque follows three regimes : g&~ & a re , re < r_c , + g&~&^sup re^3/2,r_c < re < r_t , + g&~ & ^sup re^2,re > r_t , + & = & , [ torquelinear ] where and are constants to be specified later the only measurements of torque in the subcritical case were performed by wendt and taylor in experiments with the resting inner cylinder , and rotating outer cylinder .wendt s experiments cover three different values of , taylor s cover eleven values of .taylor measurements cover sufficiently small value of reynolds number so that one can see that above a critical reynolds number , the torque bifurcates from the laminar value towards a regime in which the relative torque behaves like .an example is given in figure [ fig : torque ] .measurements by wendt at larger reynolds number display no evidence for an additional bifurcation .so , in the subcritical case , the torque presumably follows only two regimes : g&~ & a re , re < r_g , + g&~ & ^sub re^2,re > r_g , [ torquenonlinear ] where is a constant that we specify in the next subsection . as a function of the reynolds number . : super - critical case with outer cylinder at rest ; ; .data are from lewis and swinney . : sub - critical case , with inner cylinder at rest . ; .data are from wendt.,width=283 ] invoicing the continuity of the torque as a function of the reynolds number at the transitions allows to determine the prefactors , and . 
in the supercritical case ,one obtains : ^sup&=&r_c^-1/2 , + ^sup&=&^sup r_t^-1/2= , [ relationslinear ] and in the subcritical case : where is known through ( [ torquelam ] ) .this enables the knowledge of the torque as a function of and , or which then encode all the dependencies on and .this would be of great practical interest , and a posteriori gives all its importance to the work conducted in section [ stab ] , since torque measurements are usually more difficult to perform than thresholds estimations , especially when both cylinders are rotating .our argument is admittedly very crude , so it is important to test its validity on available data .figure [ fig : test - torque ] shows the comparison between the real non - dimensional torque measured in experiments , and the torque computed using only the critical reynolds number . at low reynolds number, there is a fairly large discrepancy but at large reynolds , the approximate formula provides a good estimate . as a function of the reynolds number , compared with its determination using critical reynolds numbers . : globally super - critical case with outer cylinder at rest . ; .( lewis and swinney ) . : globally sub - critical case , with inner cylinder at rest . ; ( wendt 1933 ) .short dashed line : with ; dot - dashed line : with ; long - dashed line : where .the critical reynolds numbers have been computed using results of section [ stab].,width=283 ] comparing ( [ relationslinear ] ) and ( [ relationnonlin ] ) suggest to introduce .this new threshold , defined in the supercritical case , would correspond to the reynolds number above which turbulence is sustained by the shear mechanism , and not anymore by the linear instability mechanisms .a physical basis for this expression could be given using the observation that the transition occurs in a turbulent state , where transport properties are augmented with respect to a quiescent , laminar case , in which all transport is ensured by viscous processes .this results in a _ delayed _ transition to the ultimate state , since the viscosity is artificially higher by an amount , where is the turbulent viscosity .using , we thus get from ( [ torquelinear ] ) and ( [ relationslinear ] ) an estimate of the relevant threshold as : at this stage of the analysis , , and respectively define a function of and on the intervals , and , where we recall here that and .further indication of the relevance of is provided by the continuity of this function with throughout the super / sub - critical boundaries .this is illustrated on figure [ fig : rgcontinuity ] , where the continuity is obtained on the cyclonic side between the tillmark s data ( ) and wendt s data ( ) and on the anticyclonic side between richard s data ( ) and wendt s data ( ) .as a function of and .anticyclonic side : : richard data ( ) ; : wendt data ( ) .cyclonic side : : tilmark data ( ) and fit as in section [ stab ] ; : wendt data ( ) ; : richard data ( ) . 
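These continuity relations can be collected into a small piecewise model of the torque as a function of the Reynolds number, sketched below. The laminar prefactor, the linear threshold and the transition Reynolds number are taken as inputs; in the subcritical case the intermediate Re^(3/2) branch is absent and the single threshold plays the role of the global stability boundary.

```python
def torque_model(re, a, re_c, re_t=None):
    """Piecewise non-dimensional torque G(Re) with prefactors fixed by
    continuity at the transitions.

    Supercritical (re_t given):
        G = a Re                        for Re < re_c        (laminar)
        G = a re_c**-0.5 Re**1.5        for re_c < Re < re_t
        G = a (re_c re_t)**-0.5 Re**2   for Re > re_t
    Subcritical (re_t is None): G = a Re below re_c, then (a / re_c) Re**2.
    """
    if re < re_c:
        return a * re
    if re_t is None:
        return (a / re_c) * re ** 2
    if re < re_t:
        return a * re_c ** -0.5 * re ** 1.5
    return a * (re_c * re_t) ** -0.5 * re ** 2
```

Matching the quadratic prefactors of the two cases in this way suggests identifying the generalized threshold with the geometric mean of the linear threshold and the transition Reynolds number; since the corresponding expression is elided in this copy, that identification should be read as a modelling assumption.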
the lines are guides for the eyes to underline the continuity across the supercritical to subcritical domains for similar values of .,width=264,height=226 ] torque measurements described in previous section suggests that at large enough reynolds number , an `` ultimate '' regime is reached with quadratic variation with reynolds number .this suggests that _ in this regime _ , the interesting parameter is the ratio of the torque in any configuration , to the torque measured in a special case .because the case with resting outer cylinder is the most studied , it is of practical interest to choose this case as the reference , so that the relevant ratio is , where is the torque when only the inner cylinder is rotating . given the above subsections , is only a function of and given by , where is the generalized threshold defined in the previous section and displayed on figure [ fig : rgcontinuity ] .figure [ fig : relative - torque ] indeed shows the ratio , for different values of , as a function of the rotation number .the measurements for are direct measurements from the taylor and wendt experiments .the measurements for and are indirect measurements , coming from the experiment by richard , in which only critical numbers from stability were deduced . in that case, the torques have been computed using the results of previous section .all these results show that the non - dimensional torque behaves as : where is the torque when only the inner cylinder is rotating , and is the function of figure [ fig : relative - torque ] . as a function of and . , ( resp . ) : estimation from richard data ( ) , based on critical reynolds numbers , computed using results of section [ stab ] in the anticyclonic ( resp .cyclonic case ) ; , ( resp . ) : wendt data ( ) , the square size increasing with in the anticyclonic ( resp .cyclonic case).,width=264,height=226 ] this universal function is very interesting because it provides good insight about the influence of the rotation and curvature on the torque . for rotation number ,the torques are maximal and equal to the torque measured when only the inner cylinder is rotating . for rotation numbers outside this range ,torques tend to decrease , with a sharp transition towards a constant of the order of on the side . on the other side ,the transition is softer , with an approximate quadratic inverse variation until the smallest available rotation number . 
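to make the use of this decomposition concrete, the following python sketch assembles a torque estimate as g(re, r_omega) ≈ g_i(re) · h(r_omega). it is only an illustration: the thresholds r_c and r_t, the laminar prefactor a, and the piecewise stand-in for the universal function h are hypothetical placeholder values, not the fits of this paper; in practice h should be read off the experimental curve of fig. [fig:relative-torque].

```python
import numpy as np

# illustrative thresholds and laminar prefactor -- placeholders only; the
# actual values depend on the radius ratio and must come from the data.
R_C, R_T, A = 300.0, 1.0e4, 1.0

def torque_inner_only(re):
    """non-dimensional torque G_i(Re) with only the inner cylinder rotating.

    three regimes (laminar, Re^{3/2}, Re^2), with the prefactors fixed by
    requiring continuity at Re = R_C and Re = R_T, as argued in the text.
    """
    re = np.asarray(re, dtype=float)
    alpha = A * R_C ** -0.5        # continuity of A*Re and alpha*Re^{3/2} at R_C
    beta = alpha * R_T ** -0.5     # continuity of alpha*Re^{3/2} and beta*Re^2 at R_T
    return np.where(re < R_C, A * re,
                    np.where(re < R_T, alpha * re ** 1.5, beta * re ** 2))

def h_universal(r_omega):
    """toy stand-in for the universal function h(R_Omega) described above:
    ~1 on a plateau, a sharp drop to a small constant on one side and a
    softer, roughly inverse-quadratic decay on the other side.
    all numbers here are made up; use the measured curve instead."""
    r = np.asarray(r_omega, dtype=float)
    plateau_lo, plateau_hi, floor = -0.1, 0.0, 0.1
    out = np.ones_like(r)
    out = np.where(r > plateau_hi, 1.0 / (1.0 + (r - plateau_hi) / 0.2) ** 2, out)
    out = np.where(r < plateau_lo, floor, out)
    return out

def torque(re, r_omega):
    """G(Re, R_Omega) ~ G_i(Re) * h(R_Omega), the decomposition of the text."""
    return torque_inner_only(re) * h_universal(r_omega)

if __name__ == "__main__":
    for re in (1e2, 1e3, 1e5):
        print(re, torque(re, r_omega=0.0), torque(re, r_omega=0.5))
```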
from a theoretical point of view, the asymmetry could be linked with the different stability properties of the flow on either side of the curve: for , the flow is linearly unstable, while it becomes liable to finite-amplitude instabilities outside this range. the variation we observe can also be linked with the experimental studies by jacquin et al, re-analyzed by dubrulle and valdettaro. they show that rotation tends to inhibit energy dissipation and observe simple power laws linking the energy dissipation with and without rotation as , where is a rotation number based on local shear and rotation. finally, the previous discussion shows that the knowledge of the torque in the case with resting outer cylinder, as a function of and , is an essential piece of data to compute the torque in any other configuration. a theoretical model of the torque in that configuration has been proposed by dubrulle and hersant, in the case where the boundary conditions at the cylinders are smooth. it gives: g_i = , for r_c <= re <= r_t; g_i = 0.33 , for re > r_t; k( ) = 0.0001 , [modeldh] the quality of the fit can be checked on fig. [fig:torque-rough.eps]. for reynolds numbers below the threshold, the flow is laminar and the transport is ensured only by the ordinary viscosity. [figure: torque with rough boundaries; data from van den berg et al. and from cadot et al.; the continuous lines are the formula ([torquerough]). case with two smooth boundaries; data from lewis and swinney; the dotted and dash-dotted lines are the formulae ([modeldh]).] the link between torque and critical reynolds number has a powerful potential for generalization of the torque measurements performed in the laboratory to astrophysical or geophysical flows. indeed, all the additional complications studied so far (aspect ratio, circulation, magnetic field, stratification, wide gap limit) have been found to shift the critical reynolds number for linear stability by a factor that is a function of this effect. depending on the situation, this factor can be interpreted as either a change in the effective viscosity (magnetic field) or a change in the effective length scale (aspect ratio, wide gap). if, on the other hand, the scaling of the torque with reynolds number (i.e. the shear) remains unaffected by such a process, the computations done in section [torque-out] are easy to generalize through an _effective reynolds_ number. specifically, everything that has been said for the torque in the ideal taylor-couette experiment will still be valid with the additional complication, provided one replaces the reynolds number by an effective reynolds number taking into account the stability modification induced by this effect. this principle is by no means trivial and must be used with caution, even though it may appear as nothing more than an extension of the reynolds similarity principle. in fact, it has been validated so far only in the case with a vertical magnetic field, where it has indeed been checked by donnelly and ozima that the torque scaling is unchanged by the magnetic field.
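as a minimal illustration of this "extended reynolds similarity", one consistent reading is to absorb the shift of the critical reynolds number into a rescaled reynolds number and then reuse the ideal torque curve unchanged. the sketch below assumes that particular reading and uses made-up numbers; it is not the calibration of the paper.

```python
def effective_reynolds(re, rc_ideal, rc_shifted):
    """one reading of the extended reynolds similarity: an effect that shifts
    the linear-stability threshold from rc_ideal to rc_shifted (e.g. through
    an effective viscosity or an effective length scale) is absorbed by
    rescaling the reynolds number, after which the ideal taylor-couette
    torque curve G(Re) is reused unchanged."""
    return re * rc_ideal / rc_shifted

# hypothetical example: an effect raising the critical reynolds number by 30%
# lowers the effective reynolds number, and hence the torque read off the
# ideal G(Re) curve, accordingly.
print(effective_reynolds(re=5.0e4, rc_ideal=300.0, rc_shifted=390.0))
```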
in the sequel, we shall use this procedure in disks , because we noticed that it gave the most sensible results .it would however be important to check experimentally this `` extended reynolds similarity '' principle .experimental investigation of the taylor - couette flow with different set - up has shown that boundary conditions have an influence on the torque .more precisely , it has been shown that the inclusion of one or two rough boundary condition , in configuration with outer cylinder at rest , increases the torque with respect to the case with two smooth boundary conditions , at large reynolds numbers . in convective flows , a similar increase of transport propertiesis observed when changing from no - slip to stress - free boundary conditions . in both cases ,the increase occurs so as to increase the agreement between the observed value , and a value based on classical kolmogorov theory . a theoretical study of dubrulle explains this feature through the existence or absence of logarithmic corrections ( see formula [ modeldh]-b ) ) to scaling generated by molecular viscosity and large - scale velocity gradient in the vicinity of the boundary .obviously , in the presence of a rough boundary , or under stress - free boundary conditions , mean large - scale velocity gradients are erased near the boundary , and no logarithmic correction develops .for two rough boundary conditions , cadot et al measure for , while van den berg et al . for .the analogy with thermal convection suggest that depends on its laminar value and on and like using ( [ torquelam ] ) , and the experimental law , we find , so that : the comparison between this formula and the experiments is made in figure [ fig : torque - rough.eps ] . for reference , we also added the torque in the case of two smooth boundary conditions , as given by ( [ modeldh]-b ) .we do not have any theory for the case with asymmetric boundary conditions ( one rough , one smooth ) .laboratory experiments show that the torque lies in between the curve for two smooth boundary conditions and the curve for two rough boundary conditions .the exact location however depends on local conditions in a non - trivial way ( for example it is different when the rough conditions applies to the ( rotating ) inner cylinder or to the ( resting ) outer cylinder ) .the present experimental evidence therefore only allows the torque measurements with two smooth ( resp .two rough ) boundary conditions to be considered as lower ( resp .upper ) bounds for the torque , in case of complicated boundary conditions .torque measurements by went , for different geometry show that the circulation can have an influence on the transport properties .specifically , it has been observed that an outward circulation tends to increase the torque applied on the inner cylinder , while an inward circulation tends to decrease this torque .the difference can be quite important . at large reynolds numbers ,the relative increase of the torque can be computed as a function of .this is shown in fig .[ fig : rad - torque.eps ] .one observes a quasi - linear variation : the case with intermediate boundary conditions ( presumably close to zero ) lays about half way in between the two cases so that : as a function of for in the experiment of wendt , at large reynolds numbers .the squares are the data .the line is the fit eq .( [ fitwe2 ] ) ( b ) ratio of torque applied to outer cylinder vs. 
torque applied to the inner cylinder in the case of an outward circulation , .,width=302 ] close to the transition threshold , there is also an asymmetry between the two circulation regimes : outward circulation enhances the torque with respect to the laminar regime , while inward circulation decreases this torque ! this puzzling aspect has been explained by coles and van atta ; in absence of circulation , in stationary state , the torque exerted at the inner and outer cylinder must balance . in the presence of circulation , the transport of fluid toward or away from the plane of symmetry induces an imbalance of the two torques , which ceases to be equal .coles and van atta measured this imbalance as a function of the reynolds number for the case with inner cylinder at rest , at , and with boundary conditions favoring an outward circulation .one observes an imbalance of the order of 30 to 50 percent on fig .( [ fig : rad - torque.eps ] ) , with the torque on the inner cylinder being larger .these observations suggest the following model : in the presence of a radial circulation , the inner and outer torques are modified into : g_o()&=&g(=0)(1-(re , ) ) , + g_i()&=&g(=0)(-1-(re , ) ) , [ modele ] where is a positive function of and .so , when reversing the circulation ( going from to , the torque exerted at the inner cylinder decreases ( in absolute value ) , like in wendt data .moreover , these data indicate that at large reynolds number , the function becomes independent of .note also that according to this model , we should have . at , , the data of wendtprovide a value of for this ratio , in good agreement with the value observed by coles and van atta , see fig .[ fig : rad - torque.eps ] . in this model , the total torque is zero ( conservation of total angular momentum ) only when considering the torque applied by the circulation on the top and the bottom boundary .this means that in the presence of a radial circulation , a non - negligible torque is likely to apply at the vertical boundary .this observation may be relevant to astrophysical disks , and jet - like phenomena .the influence of a constant vertical magnetic field on the torque has been studied by donnelly and ozima .the measurements have been performed in the linear instability regime , with outer cylinder at rest .it is observed that an increasing magnetic field reduces the torque , so as to conserve the scaling observed at zero magnetic field ( section [ torque - super ] ) .the torque reduction is thus a function only of a non - dimensional magnetic field , and of .examples are provided in fig .[ fig : ozima - torque.eps ] , for gap sizes and and reynolds number . , the non - dimensional magnetic number .the symbols are the data .the lines are the fit ( [ reducmag ] ) .data are form donnelly and ozima . : , ; the constant for the fit are and ; : , ; the constants used for the fit are and .,width=321 ] the torque reduction can be quantified by the dimensionless number , where is the permeability , is the electrical conductivity and is the magnetic field in tesla .it seems to follow a simple law : where and are functions of the gap size .physically , this torque reduction may be due to the elongation of the cellular vortices which occurs as the magnetic field is increased .mathematically , the reduction can be understood using the connection between torque and critical number . 
in this framework, chandrasekhar observes that the addition of a magnetic field onto a flow heated from below imparts to the liquid an effective kinematic viscosity .only the component of the field parallel to the gravity vector is effective .this makes the critical reynolds number for stability proportional to . using the relation between the torque and the critical reynolds number in the linearly unstable regime ( eqs .( [ torquelinear ] ) and ( [ relationslinear ] ) ) , this leads to the scaling ( [ reducmag ] ) .the turbulent viscosity in the direction perpendicular to the shear can be estimated via the mean torque applied by the fluid to the rotating cylinders and the mean turbulent velocity profile .indeed , this torque induces a stress equal to : where is the area of a cylindrical fluid element at radius r , is the fluid density , and is the mean azimuthal viscosity .since a similar formula applies in the laminar case , with , one simply gets : using the expression of , and ( [ torquelam ] ) , we thus get the simple expression : here , we have adopted the notation of richard and zahn to express the turbulent viscosity in unit of the typical shear and radius of the flow as .this non - dimensional parameter encompasses all the interesting variation of the turbulent viscosity as a function of the radial position and the control parameters , ( or ) and . the radial variationis given through the ratio as illustrated in section [ mean - sub ] .this ratio is one near the boundary and may increase in the core of the flow , due to the turbulent shear reduction .all the variation with is through the function which has been empirically determined in section [ torque - out ] and plotted in fig .[ fig : relative - torque ] .all the variation with is through , which can be determined through torque measurements ( sections [ torque - super ] and [ torque - sub ] ) , with a theoretical expression provided in section [ torque - out ] for smooth boundaries , and [ torque - ext ] for rough boundaries .the dependence on the curvature is subtler since it appears in all the above dependencies .an example of variation of the dimensionless turbulent viscosity for is provided in fig .[ fig : turb - visc ] for smooth and rough boundary conditions .one sees that at large enough reynolds number , this function becomes independent of the reynolds number for rough boundary conditions , while it decreases steadily in the smooth boundary cases , due to logarithmic corrections .this weak reynolds number variation is in contrast with standard turbulent viscosity prescription , based on dimensional consideration _ la kolmogorov_. , , data from van den berg et al .. the continuous line is drawn using formula ( [ torquerough ] ) .case with two smooth boundaries at , , data from lewis and swinney .the dotted and the dashed - dot lines are drawn using formulae ( [ modeldh ] ) . , width=283 ] finally ,let us compare our results with previous results for the turbulent viscosity in rotating flows . using a turbulent closure model of turbulence ,dubrulle derived .this formula reflects the correct behavior in term of ( see section [ torque - thres ] ) but fails to reproduce the reynolds dependence in the case of smooth boundary conditions . 
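the relation nu_t = nu · g / g_lam quoted above translates directly into a small numerical helper. the sketch below assumes the laminar torque scales as g_lam = a · re, with a geometry-dependent prefactor left as a placeholder; the printed example simply illustrates that an "ultimate" g ∝ re^2 torque gives a ratio growing linearly with re, i.e. a turbulent viscosity independent of the molecular one, as noted for rough boundary conditions.

```python
import numpy as np

def turbulent_viscosity_ratio(g_measured, re, prefactor_a=1.0):
    """nu_t / nu = G / G_lam, with the laminar torque taken as G_lam = A * Re
    (the laminar scaling quoted earlier); the prefactor A depends on the
    geometry and is left here as a placeholder argument."""
    g_lam = prefactor_a * np.asarray(re, dtype=float)
    return np.asarray(g_measured, dtype=float) / g_lam

# illustration: a torque following the 'ultimate' scaling G ~ beta * Re^2
# gives nu_t / nu ~ Re, so nu_t no longer depends on the molecular viscosity.
re = np.array([1e4, 1e5, 1e6])
print(turbulent_viscosity_ratio(g_measured=1e-3 * re ** 2, re=re))
```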
for rough boundary conditions ,our formula predicts a turbulent viscosity going like for and for , in the wide gap limit .the formula of dubrulle is therefore in between these two predictions .richard and zahn used taylor measurements to derive the value .these measurements are performed for , with inner cylinder at rest . at ,one has and from fig .[ fig : relative - torque ] , . for adopt a value equal to , as suggested by section [ mean - sub ] . finally , from fig .[ fig : turb - visc ] , we get so that we finally obtain , an estimate close to the one proposed by richard and zahn .the present work provides us with a prescription for the turbulent viscosity hence the turbulent transport for the taylor - couette flow .this prediction clearly indicates the dependencies on the reynolds number and the rotation number .the curvature effect is much trickier to isolate at least with the available data , since it appears in all the terms of the prescription .especially , on the cyclonic side , where the rayleigh criterion depends on the curvature , it is impossible without any phenomenological arguments to isolate the curvature effect from the rotation one within the set of data used here . since we wanted to remain as close as possible to the existing data , we decided not to reproduce any phenomenological arguments in the present paper ( for such an analysis see e.g. longaretti and dauchot ) .the introduction of new control parameters , which rely on the dynamical properties of the flow , rather than on its geometry allows us to envision some application of our result to rotating shear flows in general even if one should remain cautious with the details of the boundary conditions .these new control parameters have a rather general ground , but they remain global quantities .it would be interesting to further develop this approach , by introducing local dynamical control parameters , so that in spatially developing flows , one could conduct a local study of the stability properties . in order to validate the above prescription, it would definitely be necessary to confront it to more experimental data .on the anticyclonic side , in the subcritical regime , only one value of the curvature has been investigated , so that we have very little idea of its influence . in the supercritical regime ,an important hypothesis made here was to introduce and to relate it to and .also , we have proposed to relate the torque measurements ( a difficult experimental task ) to the threshold determination .these conjectures should be checked against more data .finally , we have tried to provide some indications on the influence of external effects such as stratification , or magnetic fields .clearly the lack of experimental data here is such that very little could be done and a definitive effort should be conducted in this direction . still , it is to our knowledge the first time that using most of the existing experimental studies a practical prescription for the turbulent viscosity is proposed. it can certainly be improved , but we believe that , even at the present level , it can already bring much insight into the understanding of some astrophysical and geophysical flows .9999999 f. hersant , b. dubrulle and j - m .hur , turbulence in circumstellar disks " , _ a & a _ accepted ( 2004 ) .d. 
richard ._ instabilits hydrodynamiques dans les coulements en rotation diffrentielle ._ phd thesis .universit de paris vii ( 2001 ) .zeldovich , on the friction of fluids between rotating cylinders " , _ proc .london a _ , * 374 * , 299 ( 1981 ) .b. dubrulle , differential rotation as a source of angular momentum transfer in the solar nebula " , _ icarus _ , * 106 * , 59 ( 1993 ) .d. richard and j .- p .zahn , turbulence in differentially rotating flows " , _ astron ._ , * 347 * , 734 ( 1999 ) . plongaretti , on the phenomenology of hydrodynamic shear turbulence " , _ astrophys . j. _ , * 576 * , 587 ( 2002 ) .s. k. bahl , stability of viscous flow between two concentric rotating porous cylinders " , _ def .sci . j. _ ,* 20 * , 89 ( 1970 ) .k. min and r. lueptow , hydrodynamic stability of viscous flow between porous cylinders with radial flow " , _ phys .fluids _ , * 6 * , 144 ( 1994 ) . g. wendt , turbulente strmung zwischen zwei rotierenden konaxialen zylindern " , _ ing ._ , * 4 * , 577 ( 1933 ) .d. coles and c. van atta , measured distortion of a laminar circular couette flow by end effects `` , _ j. fluid mech . _ * 25 * , 513 , ( 1966 ) .longaretti , o. dauchot , ' ' rotation and curvature effects in subcritical turbulent shear flows " , submitted to physics of fluids ( 2004 ) .b. dubrulle and f. hersant , momentum transport and torque scaling in taylor - couette flow from an analogy with turbulent convection " , _ eur .j. b _ , * 36 * , 379 ( 2002 ) .a. prigent , b. dubrulle , o. dauchot and i. muttabazi , the taylor - couette flow : the hydrodynamic twin of rayleigh - bnard convection `` , accepted ( 2004 ) .d.d . joseph , `` stability of fluid motions '' , i , spriger tracts in natural philosophy , vol . 27 , springer verlag ( 1976 ) .o. dauchot , p. manneville , ' ' local vs. global concepts in hydrodynamic stability theory " , j. phys .ii , france , * 7 * pp . 371 - 389 ( 1997 ) .taylor , fluid friction between rotating cylinders " , _ proc .london _ , * a 157 * , 546 ( 1936 ) . n. tillmark and p.h .alfredsson,experiments on rotating plane couette flow " , in _ advances in turbulence vi , gavrilakis , l. machiels and p.a .monkewitz eds , kluwer academics publishers _ ,p391 , ( 1996 ) .a. prigent , g. grgoire , h. chat , o. dauchot and w. van saarloos , long wavelength modulation of turbulent shear flows " , _ phys .rev . letters _ , * 89 * , 014501 ( 2002 ) .lewis and h.l .swinney , velocity structure function , scaling and transitions in high - reynolds number couette)taylor flow " , _ phys rev .e _ * 59 * 5457 ( 1999 ) .m.m couette , etudes sur le frottement des liquides " , _ ann ._ , * 6 * , 433 ( 1890 ) .taylor , stability of a viscous liquid contained between two rotating cylinders " , _ phil .london a _ , * 223 * , 289 , ( 1923 ) .. donnelly and d. fultz , experiments on the stability of spiral flow between rotating cylinders " , _ proc ._ , * 46 * , 1150 ( 1960 ) .snyder , stability of rotating couette flow .ii . comparison with numerical results " , _ phys .fluids _ , * 11 * , 1599 ( 1968 ) .p. bradshaw , the analogy between streamline curvature and buoyancy in turbulent shear flow " , _ j. fluid mech ._ , * 36 * , 177 ( 1969 ) . t.j. pedley , on the instability of viscous flow in a rapidly rotating pipe " , _ j. fluid mech ._ , * 35 * , 97 ( 1969 ). d.k . lezius and j.p .johnston , roll - cell instabilities in rotating laminar and turbulent channel flows " , _ j. fluid mech . _ , * 77 * , 153 ( 1976 ) .v.a . 
romanov , `` stability of plane - parallel couette flow '' , _ functional anal ._ , * 7 * , 137 ( 1973 ) .s. chandrasekhar , the stability of non - dissipative couette flow in hydromagnetics " , _ proc ._ * 46 * , 253 ( 1960 ) .a. esser and s. grossmann , analytic expression for taylor - couette stability boundary " , _ phys .fluids _ , * 8 * , 1814 ( 1996 ) .lathrop , j. fineberg , h.l .swinney , transition to shear driven turbulence in couette - taylor flow " , _ phys .a _ , * 46 * , 6390 ( 1992 ) .o. dauchot and f. daviaud , finite amplitude perturbation and spot growth mechanism in plane couette flow " , _ phys. fluids _ * 7 * , 335 ( 1994 ) .k. park , g.l .crawford , and r.j .donnelly , determination of transition in couette flow in finite geometries " , _ phys .rev . letters _ , * 47 * , 1448 ( 1981 ) .velikhov , stability of an ideally conducting liquid flowing between cylinders rotating in a magnetic field " , sov . phys .jetp 9 , 995 , ( 1959 ) .donnelly and m. ozima , experiments on the stability of flow between rotating cylinders in the presence of a magnetic field " , _ proc .a _ , * 266 * , 272 , ( 1962 ) .s.a . balbus and j.f .hawley , a powerful local shear instability in weakly magnetized disks .i. linear analysis " , _ astrophys .j. _ , * 376 * , 214 ( 1991 ) .g. rdiger , m. schultz and d. shalybkov , linear magnetohydrodynamic taylor - couette instability for liquid sodium " , _ phys .e _ , * 67 * , 046312 ( 2003 ). a.p . willis and c.f .barenghi , magnetic instability in a sheared azimuthal flow " , _ a&a _ , * 388 * , 688 ( 2002 ) .l. howard and a. gupta,on the hydrodynamic and hydromagnetic stability of swirling flows " _ j. fluid mech._,*14 * , 463 ( 1962 ) .dubrulle , b. and knobloch , e. on instabilities in magnetized accretion disks " , _a&a_,*256 * , 673 ( 1992 ) .molemaker , j.c .mcwilliams and i. yavneh , instability and equilibration of centrifugally stable stratified taylor - couette flow " _ phy .rev . letters _ , * 86 * , 5273 ( 2001 ) . b. dubrulle , l. mari , ch .normand , f. hersant , d. richard and j - p .zahn , an hydrodynamic shear instability of stratified disks " , submitted to a & a ( 2003 ) .i. yavneh , j.c .mcwilliams and m. j. molemaker , non - axisymmetric instability of centrifugally stable stratified taylor - couette flow " _ j. fluid mech ._ , * 448 * , 1 ( 2001 ) .withjack and c.f .chen , an experimental study of couette instability of stratified fluids " , _ j. fluid mech ._ , * 66 * , 725 ( 1974 ) .b.m . boubnov and e. hopfinger,stability of the couette flow between two independently rotating cylinders in a stratified fluid " , _ physics - doklady _ * 42 * , 312 ( 1997 ) j - c chen and j - y .kuo , the linear stability of steady circular couette flow with a small radial temperature gradient " , _ phys . fluids _ ,* a2 * , 1585 ( 1990 ) .i. mutabazi , a. goharzadeh and f. dumouchel,the circular couette flow with a radial temperature gradient " in _ 12 international couette - taylor workshop ,september 6 - 8 , 2001 , evanston , il , usa _ , ( 2001 ) .snyder and s.f .karlsson , experiment on the stability of couette motion with a radial thermal gradient " , _ phys. fluids _ * 7 * , 1696 ( 1964 ) .busse , a property of the energy stability limit for plane parallel shear flow " , _ arch . rat .anals _ , * 47 * , 28 ( 1972 ) .bounds for properties of complex systems " , in _`` non - linear physics of complex systems '' , g. parisi , s.c .muller , w. 
zimmermann ( eds ) , lecture notes in physics , _ * 476 * , 1 ( 1996 ) .bounds for turbulent shear flows " , _ j. fluid mech ._ , * 41 * , 219 ( 1970 ) .l. jacquin , o. leuchter , c. cambon and j. mathieu ,_ j. fluid mech ._ , homegeneous turbulence in the presence of rotation " , * 220 * 1 ( 1990 ) .b. dubrulle and l. valdettaro , consequences of rotation in energetics of accretion disks " , _ a&a _ , * 263 * , 387 ( 1992 ) .van den berg , c.r .doering , d. lohse and d.p .lathrop , smooth and rough boundaries in turbulent taylor - couette flow " , _ phys .e _ , * 036307 * ( 2003 ) .o. cadot , y. couder , a. daerr , s. douady , a. tsinober , energy injection in closed turbulent flows ; stirring through boundary layers versus inertial stirring " , _ phys ., * 56 * , 427 ( 1997 ) .werne , j. , turbulent convection : what rotation has taught us ? " , in _ geophysical and astrophysical convection , eds fox and kerr , gordon breach _ , 221 , ( 1996 ) .b. dubrulle , logarithmic correction to scaling in turbulent thermal convection " , _ eur .j. b _ , * 21 * , 295 ( 2001 ). b. dubrulle a turbulent closure model for thin accretion disks " , _a&a_,*266 * , 592 ( 1992 ) .+ & + & any flow variable ( e.g. , component of velocity ) + & laminar part of + & mean part of + & fluctuating part of + & typical value of x + & relates to inviscid flows + & relates to plane couette flows + & relates to taylor - couette flows + & relates to subcritical flows + & relates to supercritical flows + & relates to cyclonic flows + & relates to anti - cyclonic flows + & inertial frame cartesian components of + & inertial frame cylindrical components of + & rotating frame cylindrical components of + + & + & position vector + & inertial frame velocity vector + & rotating frame velocity vector + & fluid pressure , generalized pressure + & angular velocity + & angular velocity of the rotating frame + & velocity shear + & ( plane couette flow ) + & ( taylor - couette flow ) + & specific angular momentum + & torque + & adimensionalized torque + + + & + & kinematic viscosity + & turbulent viscosity + & mass density + & + + & + & bounding plate velocity + & gap between bounding plates + & + + & + & inner , outer cylinder radii + & inner , outer cylinder angular velocity + & gap + & radius ratio ( dimensionless measure of the gap ) + & cylinders height + + + & + = d ] & unit of time + & reynolds number of moving plates ( plane couette flow ) + & reynolds number ( plane couette flow ) + & reynolds number of rotating cylinders ( taylor - couette flow ) + & reynolds number ( taylor - couette flow ) + & + + + & + = d ] & unit of time + & reynolds number + & rotation number + & rotation number at marginal stability ( ) + & curvature number + & + + + & + & local `` reynolds '' ratio + & local `` rotation '' ratio + & local `` curvature '' ratio + & + + & + & first supercritical linear transition + & minimal reynolds number for self - sustained turbulence + & transition to `` hard '' turbulence ( as traced by torques ) + + & + & cyclonic data + & anti - cyclonic data + & wendt ( 1933 ) data + & taylor ( 1936 ) data + & tillmark and alfredsson ( 1996 ) data + & richard ( 2001 ) data + & lewis and swinney ( 1999 ) data + +
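for readers who want to connect the notation table to actual laboratory parameters, the snippet below assembles the purely geometric quantities (gap, radius ratio) and the two conventional cylinder reynolds numbers. the shear-based reynolds, rotation and curvature numbers used as control parameters in the text are combinations of these, and their precise definitions are the ones given in the corresponding section of the paper, not restated here; the numbers in the example are illustrative only.

```python
def taylor_couette_geometry(r_in, r_out, omega_in, omega_out, nu):
    """geometric and kinematic quantities appearing in the notation table:
    gap, radius ratio and the conventional inner/outer cylinder reynolds
    numbers Re_i = r_i Omega_i d / nu and Re_o = r_o Omega_o d / nu."""
    d = r_out - r_in                      # gap
    eta = r_in / r_out                    # radius ratio
    re_in = r_in * omega_in * d / nu      # inner-cylinder reynolds number
    re_out = r_out * omega_out * d / nu   # outer-cylinder reynolds number
    return {"gap": d, "eta": eta, "Re_i": re_in, "Re_o": re_out}

# example: counter-rotating cylinders in water-like units (illustrative numbers)
print(taylor_couette_geometry(r_in=0.10, r_out=0.15,
                              omega_in=10.0, omega_out=-2.0, nu=1e-6))
```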
this paper provides a prescription for the turbulent viscosity in rotating shear flows, for use e.g. in geophysical and astrophysical contexts. this prescription is the result of a detailed analysis of the experimental data obtained in several studies of the transition to turbulence and of turbulent transport in taylor-couette flow. we first introduce a new set of control parameters, based on dynamical rather than geometrical considerations, so that the analysis applies more naturally to rotating shear flows in general and not only to taylor-couette flow. we then investigate the transition thresholds in the supercritical and the subcritical regimes in order to extract their general dependencies on the control parameters. the inspection of the mean profiles provides us with some general hints on the mean-to-laminar shear ratio. the examination of the torque data then allows us to propose a decomposition of the torque dependence on the control parameters into two terms, one completely given by measurements in the case where the outer cylinder is at rest, the other being a universal function provided here from experimental fits. as a result, we obtain a general expression for the turbulent viscosity and compare it to existing prescriptions in the literature. finally, throughout the paper we discuss the influence of additional effects such as stratification or magnetic fields.
polymer physics , with an old and venerable history , spanning more than 60 years , now occupies an important position in basic physics , providing conceptual support to wide varieties of problems .a polymer , from a physicist s point of view , is a set of units , called monomers , connected linearly as a chain .such polymers are the natural or synthetic long chain molecules formed by bonding monomers chemically as in real polymers or bio - polymers like dna , proteins etc , but they need not be restricted to those only . polymers could also be the line defects in superconductors and other ordered media , the domain walls in two dimensional systems and so on .even if non - interacting , a polymer by virtue of its connectivity brings in correlations between monomers situated faraway along the chain .this makes a polymer different from a collection of independent monomers .the basic problem of polymer physics is then to tackle the inherent correlations due to the long length of the string like object .a gas of isolated monomers at any nonzero temperature would like to occupy the whole available volume to maximize the entropy but that would not be the case when they are connected linearly as a polymer .this brings in a quantity very special to polymers , namely the equilibrium size of a polymer , in addition to the usual thermodynamic quantities .traditionally thermodynamic quantities , at least for large , are expected to show extensivity , i.e. , proportionality to the number of constituent units , but the size of a polymer in thermal equilibrium need not respect that . in other words ,if the length of a polymer is doubled , the size need not change by the same factor .consequently even the usual thermodynamic quantities would have an extra polymer length dependence which will not necessarily be extensive but would encode the special polymeric correlations mentioned above .how the equilibrium size of a polymer changes or scales , as its length is increased , whether this dependence shows any signature of phase transitions with any external parameter like temperature , and the consequent effects on other properties are some of the questions one confronts in the studies of polymers .the success of exact methods , scaling arguments and the renormalization group crafted the statistical physics approach to polymer physics into a well defined and recognized field .one of the first , and most successful , theoretical approaches to thermophysical properties of polymers , is the celebrated flory theory , that will be the central topic of this review .this simple argument was a key step in the history of critical phenomena , especially , in seeing the emergence of power laws and the role of dimensionality .for the special effects of long range correlations that develop near a critical point , one needs a fine tuning of parameters like temperature , pressure , fields etc , to be close to that special point .in contrast , the simple flory theory showed that a polymer exhibits critical features , power laws in particular , and a dimensionality dependence beyond the purview of perturbation theories , all without any requirement of fine - tuning .here is an example of self - organized criticality - a phenomenon where a system shows critical - like features on its own without any external tuning parameter - though the name was coined decades after the flory theory .various monographs , covered different aspects of methodologies and techniques .this notwithstanding , our aim is to bring out the nuances 
present in the flory theory and to place it in the current context , to appreciate why this theory stands the test of time as compared to other mean - field theories .this review is organized as follows . after a recapitulation of the basic facts of a noninteracting polymer and the simple flory theory in sec .[ sec : elementary ] , we introduce the edwards continuum model ( section [ sec : edwards ] ) and the mean field approximation to its free energy ( section [ sec : flory ] ) .this forms the basis for discussing the flory approximation through a saddle - point method ( section [ subsec : steepest ] ) .the results for the three regimes of a polymer ( swollen , theta and compact ) , and the transition behaviour can also be found in the same section . how the flory theory fares when compared with the current view of scale invariance , universality and scaling is discussed in sec .[ sec : flory - theory - modern ] and the role of a microscopic length scale discussed there .a few modifications , and a simple extension to include external forces applied to one extreme of the polymer , are discussed in sections [ subsec : elastic ] and [ subsec : inclusion ] , respectively . while the original flory theory describes the size at a fixed temperature , as the number of monomers increases , it is possible to go beyond power laws in the current framework .the analysis allows one to discuss the temperature dependence of the size at a fixed number of monomers ( assumed to be sufficiently large ) .this cross - over effect is discussed in section[sec : crossover ] .a particularly interesting case appears to be the two - dimensional case , discussed in section [ sec : explicit ] , where the scaling function can be computed exactly .section [ subsec : uniform ] also includes the uniform expansion method along with its relationship with a perturbative approach .besides the three states mentioned above , there is an obvious state of a polymer , namely a stretched or a rod like state .this state can be achieved by a force at one end , keeping the other end fixed , or by assigning a penality for bending . in absence of any interaction, there is no transition from this rod - like state to any of the other states .but still , for completeness , the universal features of the crossover behaviour needs to be discussed .this is done in the last part of the paper .it is devoted to the semiflexible chain , where bending rigidity competes with entropy .the response of the polymer when a pulling force is applied to an extremum is discussed in section [ sec : semiflexible ] , with an eye on the interpolating formula between flexible and semiflexible regimes .ancillary results for the structure factor and the end - to - end distance will also be presented in section [ subsec : structure ] .several technical issues are relegated to the appendixes .a few gaussian transformations that are frequently employed are listed in appendix [ app : hubbard ] .a discussion on the central limit theorem as applied to polymers and a possible deviation can be found in appendix [ app : distribution ] . in appendix[ sec : perturbation ] , the theoretical framework of perturbation theory , is introduced at the simplest possible level , and the lowest order calculation is explicitly performed to show how the method works . 
finally , for completness , appendices [ app : structure_gaussian ] , [ app : exact ] , [ app : green ] , [ app : integral ] include the explicit derivation of some results that are used in the main text .we end this introduction with a few definitions . if all the monomers , and therefore the bonds , can be taken as similar , then the polymer is called a homopolymer .if there is any heterogeneity either in monomers or in bonds , it will be a heteropolymer . in case of two types of monomers arranged in a regular pattern , the polymeris called a co - polymer .two different types of polymers connected together is an example of a block - copolymer .this review focuses on the homopolymer case only .we use the symbol to denote the dependence on certain quantities , ignoring prefactors and dimensional analysis , while the symbol is to be used for approximate equality .consider an isolated homopolymer formed by monomers at positions in space , and let be the monomer - monomer distance ( sometimes also referred to as the kuhn length ) .this is depicted in fig.[fig : fig1 ] .tethered spheres ( monomers ) at positions , with .the size of the monomers could be indicative of the excluded volume interaction of the monomers .( c ) a bead spring model where the harmonic springs take care of the polymer connectivity .( d ) continuum model - no details of the polymeric structure is important .the location of a monomer is given by a length .,scaledwidth=50.0% ] we further introduce the bond variable and the end - to - end distance , a flexible polymer is defined as one for which the bond vectors are completely independent so that each bond can orient in any direction in space irrespective of the orientations of the others .this freedom is expressed as an absence of any correlation between _ any _ two different bonds , that is this is the basis of the freely - jointed chain(fjc ) . as the monomer - monomer distance is fixed , the average in eq.([elementary : eq2 ] ) is an average over all possible orientations .this ensemble averaging is denoted by the angular brackets .a more realistic model , where there is an orientational correlation between successive bonds , called worm - like chain model ( wlc ) ( or kratky - porod model ) , is the paradigm of the stiff polymer , and will be discussed later on .a use of eqs.([elementary : eq1 ] ) and ( [ elementary : eq2 ] ) leads to so that the size , measured by the root mean square ( rms ) end - to - end distance of a polymer , depends on its length as with for the fjc .the exponent is called the size exponent .we are using the rms value as the size of the polymer because by symmetry ( i.e. isotropy ) . a judicious choice of origincan always remove a non - zero average of any probability distribution , whereas it would be impossible to make the variance zero .hence the importance of the rms value as a measure of the size .the behavior described by eq.([eq:2 ] ) can be also read as follows . if a sphere of radius is drawn with its center in a random position along the chain , the total length of the polymer contained in the sphere is about , with being what is known as the _ fractal dimension_. so , the fractal dimension of our non - interacting polymer is .the probability distribution of the end - to - end distance is a gaussian ( see appendix [ app : distribution ] for details ) and in it is ( see eq.[distribution : eq12 ] ) . \end{aligned}\ ] ] the standard deviation , that determines the width of this distribution , gives the rms size of eq . 
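the nu = 1/2 scaling of the ideal chain is easy to check numerically. the following sketch draws freely jointed chains with independent, isotropically oriented bonds of length b and verifies that the mean square end-to-end distance grows as n b^2 (the bond number n is used in place of the monomer number, which only shifts n by one).

```python
import numpy as np

def fjc_mean_square_end_to_end(n_bonds, b=1.0, n_samples=5000, seed=0):
    """monte carlo estimate of <R^2> for a freely jointed chain: each bond is
    a vector of fixed length b with an independent, uniform orientation."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n_samples, n_bonds, 3))         # gaussian triplets ...
    v *= b / np.linalg.norm(v, axis=-1, keepdims=True)   # ... normalised to isotropic unit bonds
    r_end = v.sum(axis=1)                                # end-to-end vectors
    return np.mean(np.sum(r_end ** 2, axis=-1))

for n in (16, 64, 256):
    print(n, fjc_mean_square_end_to_end(n), n * 1.0 ** 2)   # compare with N b^2
```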
.a chain characterized by the gaussian behavior ( [ elementary : eq4 ] ) is also called an _ ideal or phantom _ chain .it also goes by the names of a gaussian polymer , a non - self - interacting polymer .these names are used interchangeably .the size of a polymer discussed above is an example of a critical - like power law whose origin can be traced to correlations .even - though the bonds are uncorrelated , the monomers are not . this can be seen from eqs . and as the positions of monomers and satisfy ^ 2 \right \rangle = \sum_{l , m = i+1}^{j } \left \langle { \bm \tau}_l \cdot { \bm \tau}_m\right \rangle = ( j - i ) b^2.\ ] ] generalizing eq ., the conditional probability density of monomer to be at if the -th monomer is at is given .\ ] ] the distribution becomes wide as increases and it is not factorizable .this is to be contrasted with the case of noninteracting monomers without polymeric connections .there this joint probability distribution is the product of the individual probability densities and hence devoid of any correlations .are correlated , i.e. if and only if .] the behaviour of an ideal chain as formulated here is purely entropic in origin because all the configurations are taken to have the same energy .if one generalizes eq.([elementary : eq2 ] ) by substituting by a general correlation which ( a ) depends only on , and ( b ) is such that , then the results , like , remain essentially the same , since eq.([elementary : eq3 ] ) is modified by a multiplicative constant . in this case , the decay length of the correlation gives the kuhn length .to go beyond the gaussian behaviour , let us introduce the repulsive interaction of the monomers , _e.g. _ , the athermal excluded volume interaction .the question is how this repulsion of the monomers affects the size of the polymer .does it just change the amplitude in eq .or it changes the exponent ?a change in the exponent needs to be taken more seriously than in the amplitude because the latter is equivalent to a change in the unit of measurement while the former changes the fractal dimension of the polymer .a simple way to accounting for the fact that non - consecutive spheres ( i.e. monomers ) can not interpenetrate , is provided by a hard - sphere repulsion , that is proportional to the excluded volume of each pair of monomers , times the number of monomer pairs ( ) per unit of available volume ( ) , that is the total free energy of the system can then be quickly estimated as follows . from eq.([elementary: eq4 ] ) is the entropy of the chain , number of chains of monomers and end - to - end distance ] ( see sec.[sec : continuum - model ] ) .although is a measure of the polymer length it is dimensionally like a surface , because of the fractal dimension ( ) mentioned in sec ii . since ={\sf{l}}^{-d} ] with the second term not contributing to the result of the radial integral . ] e^{-\beta h}.\end{aligned}\ ] ] the delta function in eq . maintains the fixed length constraint of each bond .it is this constraint that prevents the unwanted extensions ( in the tail of the gaussian distribution ) of a continuous chain .we choose our axes such that the force is in the -direction and the quantity of interest is the extension , where indicates the z - direction . 
by the way , the hamiltonian eq .is identical to a classical ferromagnetic one dimensional heisenberg model in a field , if is treated as a fixed length spin vector .let us first consider the small force regime , where a linear response is expected , , with the response function where the correlations are evaluated in the zero - force condition indicated by the subscript 0 . for the classical 1-dimensional model , these correlations decay exponentially for all temperatures , here and this also defines the perpendicular component as given in eq.([discrete : eq3 ] ) .the decay length is the persistence length .the correlation here may be compared with the flexible case , eq . .ignoring end - point effects ( equivalent to assuming a circular polymer ) , and converting the sum to an integral , we get . therefore for small forces , we expect for large forces , the polymer is going to align with the force and be completely stretched except for thermal fluctuations .the fully stretched condition means and therefore the delta function constraint in eq .is going to play an important role .the deviation from the fully stretched state comes because of transverse fluctuations and it would go to zero as . by writing with small transversal part , i.e. for , we have under the same approximation for as in eq ., the hamiltonian can be approximated , dropping redundant terms , as where , for all except . in the following ,we neglect this boundary effect and set . for a very large force ,the leading term of the hamiltonian is . by the equipartition theorem, we then expect . by using this result in eq . , the behavior is both eqs.([discrete : eq2bis ] ) and ( [ discrete : eq:3 ] ) agree with the small and large limits obtained by the more elaborate calculation of sec[subsec : large ] and sec.[subsec : marko ] .it is then possible to generate an interpolation formula that satisfies the two asymptotes , namely and .the interpolation formula is derived below _after _ taking the continuum limit which requires a more detailed evaluation of the large force limit. the continuum limit of the discrete chain with bending energy does not follow from the procedure adopted for the fjc .the reason for this is that in the edwards model the length is like an area or the chain is not a space curve .a semiflexible polymer configuration involves the tangent vectors for which it has to be taken as a space curve .therefore two points on the polymer and separated by a contour length has to satisfy .this condition at every point on the curve can be enforced by a -function in the partition function and the gaussian term of the edwards model does not appear . by writing , a continuum limit for the bending energy for would give a derivative of i.e. , . with the above introduction ,let us introduce the partition function and the free energy for a semiflexible chain under the action of an external force , \ e^{-\beta h},\end{aligned}\ ] ] with a hamiltonian in eq.([semi : eq2 ] ) is the persistence length which is the tangent tangent correlation length defined as ,\end{aligned}\ ] ] the continuum analog of eq . .the fact that the introduced in eq.([semi : eq2 ] ) coincides with the actual persistent length given in eq.([semi : eq3b ] ) will be shown below .if is a constant , then the last term in eq.([semi : eq2 ] ) becomes ] .this matches eq . with .the integral in eq . 
can be handled exactly .we use the following result where is a bessel function with the property , to obtain by making use of the expansion and the property of the gamma function , one obtains eq . [distribution : eq10 ] with . for the gaussian distribution , and its taylor series expansion around matches with eq .[ distribution : eq10 ] . here . the above derivation , a version of the central limit theorem , is valid only if , otherwise the expansion in eq .[ distribution : eq10 ] is useless .there are important distributions which may not have finite variances . in those cases ,a gaussian distribution is not expected .an example is the cauchy distribution with infinite mean and variance . , in eq ., for large , does not converge to a gaussian but to another cauchy distribution .the difference with the gaussian distribution lies mainly in the tail ( large behaviour ) of this distribution - the large behaviour of eq .[ distribution : eq13 ] is responsible for the divergent mean and variance .it is precisely for this reason , we do not consider such distributions in this review .our interest is in the behaviour of a polymer whose properties do not require special or exceptional contributions from very large sizes .instead of the flory approach that explores the large region directly , we here consider the small case which in principle can be handled in a perturbative way .the ultimate difficulty is in tackling the series which in most cases turns out to be asymptotic in nature .we go back to eq.([edwards : eq12 ] ) and expand the right - hand side in powers of ( for ) where the first order terms in the expansion of the two - body term shown in eq.([perturbation : eq1 ] ) is the delta function ensures that there is one contact along the chain .the series has the interpretation that the first term is the partition function without any concern about the interactions while the second term is the sum over all configurations that have one interaction along the chain .the calculation of the end - to - end distance also involves an expansion in , coming from both the numerator and the denominator .it is more or less straight - forward to calculate for the free case for generality , especially for higher order corrections , two possible procedures to compute the first order correction are discussed below .the convolution property of the gaussian distribution states that the probability of a gaussian polymer reaching at length can be written as a product of its being at any point at an intermediate length and then from to in the remaining length , with an integration over . with repeated use of the convolution property ,( [ laplace : eq4 ] ) .the relevant average required for the two - body correction term is eq .( [ eq:3pert ] ) has the interpretation of a polymer reaching at length from the origin and then returning to at length from where it goes to the desired endpoint . since could be any two points , there are integrals over each of them .the occurrence of is the signature of a loop formation that contains the main aspect of the polymer correlations because it involves contact of two monomers which may be nearby ( small ) or far - apart ( large ) along the chain .the eventual gaussian integrals can be done .however the integrals are divergent .the integrals over involve a term of the type which is divergent for .such divergent integrals can be handled by analytic continuation in by performing the integration where it is convergent and then analytically continued to other dimensions . 
if is such that the integral converges , then which can then be extended to all .the poles at and are responsible for the divergence at other values of .a similar expansion in can be performed for the end - to - end distance given by eq.([perturbation : eq4 ] ) , by collecting terms of similar order from both the numerator and the denominator . to first order the correction would look like with the use of eqs .( [ perturbation : eq2 ] ) and ( [ eq:2pert ] ) , and the standard results of gaussian integrals , the two -dependent terms can be written as so that we are left with the integral of eq .( [ gaussian : eq13 ] ) . with the analytic continuation, the end - to - end distance is given by ,\end{aligned}\ ] ] with as in eq . with .the divergence as is an important outcome of this perturbative analysis and its handling is part of the renormalization group machinery .the same result can be obtained by using the laplace - fourier approach .this method requires an integral over the length from zero to infinity and therefore may be called `` grand canonical '' compared to the approach of the previous section , which may be termed as `` canonical '' .the laplace - fourier transform is defined by along with its inverse as usual , in eq.([laplace : eq2 ] ) is a real constant that exceeds the real part of all the singularities of .we now go back to the expansion ( [ perturbation : eq1 ] ) that can be laplace - fourier transformed to obtain for simplicity , we limit here the discussion to the two - body interactions , but additional terms can be also considered .given that , the end - to - end distance can be computed from } { \int_{\gamma-\mathrm{i } \infty}^{\gamma+\mathrm{i } \infty } de \ ,e^{el } \widetilde{g_{e } } \left(\mathbf{k } \right ) } \right\}_{\mathbf{k}=\mathbf{0}}\end{aligned}\ ] ] the great advantage of the laplace - fourier transform is clearly that both the and convolutions appearing in eq.([laplace : eq5 ] ) can be decoupled so that that is ^ 2 \int \frac{d^d\mathbf{q}}{\left(2\pi\right)^d } \widetilde{g}_{e}^{(0 ) } \left(\mathbf{q } \right ) \end{aligned}\ ] ] where in eq .( [ laplace : eq10 ] ) we have included an term to keep all integrals convergent , with the understanding that the limit will be taken at the end of the calculation . the integral appearing in eq.([laplace : eq9 ] ) is given by let us now compute the first correction explicitly .eq.([laplace : eq2a ] ) yields ^ 2 + \ldots\end{aligned}\ ] ] where we have introduced the `` self - energy '' then we next note that because the green function is always a function of rather than the wavevector itself , we can write and using eqs.([laplace : eq14 ] ) , ( [ laplace : eq15 ] ) , and ( [ laplace : eq17 ] ) into eq.([laplace : eq3 ] ) one gets ^ 2 } } { \int_{\gamma-\mathrm{i } \infty}^{\gamma+\mathrm{i } \infty } de \ , \frac{e^{el}}{\left[e+ u i_d\left(e,\epsilon\right)\right ] } } \end{aligned}\ ] ]this completes the scheme for the solution . in ,the relevant integral ( [ laplace : eq11 ] ) reads where the integral has been extended to negative values by taking advantage of the parity of the integrand .the integral can be easily computed by contour method by extending the contour in the upper plane and noting that only two of the four poles are then included .these are to lowest order in , and .this produces the result once again , only the lowest correction in has been included .clearly the integral is divergent for but this divergence can be accounted for using a renormalizing procedure , as explained in ref. 
and they turn out to be irrelevant for the computation of the as it should . on dropping the dependent term in eq .[ first : eq2],this can be inserted into eq.([laplace : eq18 ] ) , that can then be expanded in powers of to first order .the result is } { \left [ \int_{\gamma-\mathrm{i } \infty}^{\gamma+\mathrm{i } \infty } de \ , \frac{e^{el}}{e } + u \frac{3}{2 \pi } \sqrt{\frac{6}{b } } \int_{\gamma-\mathrm{i } \infty}^{\gamma+\mathrm{i } \infty } de \ , \frac{e^{el}}{e^{3/2 } } + \ldots \right ] } .\end{aligned}\ ] ] all integralscan then be performed by using the result higher orders and additional details can be found in ref. . the final result has been quoted in eq .( [ first : eq5 ] ) .the size of a polymer is a geometric quantity which is generally not a conventional thermodynamic variable .however the discrete polymer model introduced here allows one to translate the polymer problem to a more familiar language for which one may associate standard thermodynamic quantities .the bond variables introduced in eqs . , and can be taken as spin like variables whose allowed orientations depend on the dimensionality and the topology of the space ( e.g. , continuum or lattice ) .the interactions of the monomers can also be expressed as interactions among the spins , not necessarily restricted to simple two spin interactions as in eq . .the polymer problem is then exactly equivalent to a statistical mechanical problem of a collection of spins at a given temperature .the response function of such a collection of spins is the susceptibility which measures the response of the total spin ( i.e. total magnetization ) to a uniform magnetic field .the end to end distance of the polymer turns out to be the total spin , as noted is sec [ subsec : discrete ] .the fluctuation - response theorem connects the susceptibility to the fluctuation of the total spin , ( see sec subsec : marko ) as and by symmetry , . therefore the susceptibility of the spin system , as a magnetic model , corresponds to the mean square end - to - end distance of the polymer . as a magnetic system ,the primary requirement is to have an extensive susceptibility which means for spins , at least for large .the stringent requirement of a thermodynamic limit as a magnetic model would enforce only the gaussian behaviour of the polymer .in contrast , the susceptibility per spin would behave as for the spin models that correspond to an interacting discrete polymer .interestingly , the polymer size exponent is linked to the finite size behaviour of the spin - problem as .this points towards the care needed in using thermodynamics and extensivity in polymer problems .consider the structure factor for a gaussian chain , we know that and hence = e^{-\frac{1}{2 } k^2 \frac{\left\vert i - j \right \vert}{d}}. \end{aligned}\ ] ] therefore we find where we have introduced the debye function dimensionally , is like an inverse of length and we see that the structure factor involves the dimensionless variable .the scale for is set by the overall size of the polymer , not its microscopic scales .the partition function eq.([discrete : eq1 ] ) can be solved exactly in the absence of the interaction term ( ) , when the model reduces to the freely jointed chain ( fjc ) . 
in this caseeach term of eq.([discrete : eq1 ] ) decouples and we can use the result so that the configurational partition function becomes ^n\end{aligned}\ ] ] introducing the physical force , we then have that this gives the well known result where the langevin function is defined as in the limit , eq.([exact : eq3 ] ) can be expanded and gives to leading order start from the following addition theorem where is the modified bessel function so that eq.([structure : eq8 ] ) becomes using the orthogonality relation eq.([green : eq2 ] ) reduces to in the limit we can use the asymptotic expansion for the bessel function for \end{aligned}\ ] ] to obtain that is the result given in eq.([structure : eq10 ] ) .note that in obtaining ( [ green : eq4 ] ) and ( [ green : eq6 ] ) , we have set and used the relation between the kuhn and the persistence length for the wlc model .to compute \rangle$ ] given in eq.([structure : eq12 ] ) , we need to compute the average quantity using the first two spherical harmonics and the orthogonality relations ( [ green : eq3 ] ) , eq.([integral : eq1 ] ) reduces after few steps to \end{aligned}\ ] ] that coincides with the expected result eq.([structure : eq6b ] ) , taking into account the other two components and . upon inserting this result into eq.([structure : eq12 ] ) , one can use a procedure so that ( ) ^ 2 \right \rangle & = & 2 \int_{{s}^{\prime}}^{s } d s_1 \int_{{s}^{\prime}}^{s_1 } d s_2 \frac{1}{3 } e^{\left(s_1-s_2\right)/l_p}= \frac{2}{3 } l_p^2 \left [ \frac{s - s^{\prime}}{l_p}-1+e^{-\left(s - s^{\prime}\right)/l_p}\right]\end{aligned}\ ] ] note that , within the de gennes mapping between spin models and polymers with the number of components going to zero , this quantity is , in fact a compressibility ( or a susceptibility ) . see v. j. emery phys . rev.b * 11 * 239 ( 1975 )
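As a closing numerical illustration of the freely jointed chain result quoted above, the sketch below evaluates the Langevin function and the resulting force-extension curve. This is only a hedged check under assumed symbols: the bond length b, the number of bonds N and the dimensionless force βFb are illustrative stand-ins, since the inline formulas are not reproduced in this extract; the weak-force limit should reduce to the linear response of a Gaussian chain.

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with a Taylor fallback near x = 0."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-4
    out = np.empty_like(x)
    xs = x[~small]
    out[~small] = 1.0 / np.tanh(xs) - 1.0 / xs
    # Series L(x) ~ x/3 - x^3/45 avoids cancellation for very small arguments.
    out[small] = x[small] / 3.0 - x[small] ** 3 / 45.0
    return out

# Freely jointed chain: N bonds of length b pulled by a force F at inverse temperature beta.
N, b = 1000, 1.0
betaF = np.logspace(-3, 2, 200)          # dimensionless force beta*F*b from weak to strong pulling
extension = N * b * langevin(betaF * b)  # mean end-to-end projection along the force

# Weak-force check: <z> ~ (N b^2 / 3) * beta*F, the Gaussian (linear-response) regime.
weak = N * b**2 * betaF / 3.0
print(extension[0], weak[0])             # nearly equal at the smallest force
print(extension[-1] / (N * b))           # approaches 1 (full stretching) at strong force
```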
We review various simple analytical theories for homopolymers within a unified framework. The common guideline of our approach is the Flory theory and its various avatars, and we attempt to keep the presentation reasonably self-contained. We expect this review to be useful as an introduction to the topic at the graduate student level.
within the financial industry forward rate curves play a central role in fixed - income derivative pricing and risk management .notwithstanding , such curves are not empirical or directly measurable objects but rather useful abstract concepts from where observed prices can be derived . moreover , given a finite set of market prices , we can construct in general an infinite number of compatible forward rate curves .to avoid such ambiguity several approaches have been proposed in the literature trying to capture a reasonable or natural functional form within the set of compatible possibilities . in historical orderthe first kind of methods proposed to solve this problem make use of the so - called parametric approach . in this approach a particular functional form for the forward rate curveis assumed leaving a certain number of free parameters to be fixed from the calculation of a given set of quoted prices .an extensive literature exists advocating for this approach. we can cite as examples the works of mcculloc , vasicek and fong , chambers , carleton and waldman , shea , nelson and siegel and more recently the works of svensson , fisher , nychka and zervos and waggoner .in most of these works we notice the privileged role played by polynomial and exponential splines as the preferred functional forms for the forward rate curves .the second kind of methods has been termed in the literature as non - parametric or maximum - smoothness approach . here instead of advocating for an a priori functional form for the forward rate a given measure of smoothness is chosen and then the forward rate curve is obtained as the one maximizing this measure subject to the constraints imposed by market prices .examples where these methods have been investigated include the works of adams and van deventer , delbaen and lorimier , kian guan lim , qin xiao et al , frishling and yamamura and yekutieli . in theseworks three different smoothing measures have been proposed .we also have the works of forsgren and kwon that generalize these methodologies and clarify the connection between splines and certain smoothness measures .finally we point out the work of wets , bianchi and yang that can be located somewhere in - between both approaches since here the number of functional parameters is finite ( albeit arbitrarily large ) and the functional behavior is restricted to a subfamily of curves .the purpose of this article is two - fold : firstly we want to present an efficient maximum - smoothing algorithm that handles the presence of spreads and implements the positivity constraint .secondly we want to investigate the predictive power of a linear combination of two quadratic measures , namely the one proposed by delbaen et al . and frishling et al . and the one by adams and van deventer . hereit is worth remarking that once the compatibility with market prices is fulfilled the only guiding principle that should be taken as definition of reasonable or natural is the predictive power and not other ad - hoc criteria . in this articlewe will only use as constraining data coupon bearing bonds .the inclusion of treasury bills , zero coupon bonds or bill futures is straightforward and amounts to adding the corresponding linear constraints . 
since our objective in this article is focusing on an algorithm dealing with non - linear constraints and inequality constraints ( spreads and positivity constraint ) we have not included such data .with these objectives in mind we organize the article as follows : in section [ sec algorithm ] we present the objective function that we will use throughout the article and we establish the basic notation . in this sectionwe present a sketch of the complete algorithm leaving the details for the appendices . in section[ sec results ] we present the results of the article including examples where the absence of the positivity constraint or the adequate spreads leads to negative rates . herewe present also a study of the predictive power of the one - parametric family of smoothness measures that include as extreme cases the measures used by delbaen et al ., frishling et al . and adams and van deventer . finally in section [ sec conclusions ] we present the conclusionsa bond , , is an instrument that gives future coupons , , at time stages and a final payment , we mean the complete last cash flow of bond , typically that includes a principal plus a last coupon . ] .the bond price , , can be determined from the discrete forward rate curve , , as follows where is the length of the time period between time stage and ( in our implementation we have used day ) .the objective function , or smoothing measure , is defined as a linear combination of the one used by delbaen et al . and frishling et al .( df ) and the one used by adams and van deventer ( ad ) the first term in eq.([objective ] ) ( df ) is a discrete approximation of the integral of the square of the first derivative of and the second term ( ad ) is a discrete approximation of the integral of the square of the second derivative of .this objective function is to be minimized subject to the consistency constraints where is to be determined along with and where we have used the definitions with the respective bid and ask prices of bond ( ) .note that the equality constraint given by eq.([eq_const ] ) is just eq.([bondprice ] ) rewritten taking logarithms and using the definitions ( [ definitions ] ) .eq.([ineq_constr ] ) introduces two inequality constraints .the first one is the positivity constraint over the forward rate curve and the second one is the requirement that the single price given by eq.([bondprice ] ) must lie in - between the bid and ask prices .we define the spread of bond as the quantity .we take the largest time to maturity in eq.([objective ] ) equal to the largest time to maturity in the constraining dataset , namely the constraints reflecting bond prices ( [ bondprice ] ) have been rewritten in a way such that they become linear when no coupons are present ( for a zero coupon bond ) .constraints given by eqs.([eq_const ] ) and ( [ ineq_constr ] ) are moved to the objective function defining -\mu\sum _ { r=1}^{n}\ln\left ( f_{r}\right ) -\tilde{\mu}\sum_{j=1}^{m}\left ( \ln\left ( \rho_{j}-\rho_{j}^{b}\right ) + \ln\left ( \rho_{j}^{a}-\rho _ { j}\right ) \right ) , \label{total_obj}\ ] ] with the lagrange multipliers and the logarithmic barriers with parameters and ( in the solution procedure we take , ) .the use of log barriers to deal with inequality constraints is a standard methodology in interior point methods for optimization problems .an explanation of this methodology adapted to our problem is given in appendix a. 
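To make the discrete quantities above concrete, the following sketch prices a coupon-bearing bond from a discrete forward curve and evaluates the smoothness measure as a linear combination of squared first and second differences. It is a minimal illustration, not the paper's implementation: the one-day uniform grid, the sample coupon schedule, the forward curve and the weights used for the two terms are all made-up values, and the symbol names are assumptions since the original inline notation is not reproduced in this extract.

```python
import numpy as np

xi = 1.0 / 365.0                       # uniform time step of one day, as in the text
T = 5.0                                # five-year horizon (illustrative)
n = int(round(T / xi))
t = xi * np.arange(1, n + 1)           # grid of time stages
f = 0.04 + 0.01 * np.exp(-t)           # an illustrative, strictly positive forward curve

# Discount factor to each grid point from the discrete forward rates:
# D(t_k) = exp(-sum_{r<=k} f_r * xi).
D = np.exp(-np.cumsum(f * xi))

def bond_price(coupon_times, coupons, final_time, final_payment):
    """Price of a coupon-bearing bond from the discrete forward curve."""
    idx = np.minimum(np.searchsorted(t, np.asarray(coupon_times)), n - 1)
    k_final = min(np.searchsorted(t, final_time), n - 1)
    return float(np.sum(np.asarray(coupons) * D[idx]) + final_payment * D[k_final])

# Illustrative bond: four annual coupons of 5 and a final cash flow of 105 at five years.
print(bond_price([1.0, 2.0, 3.0, 4.0], [5.0] * 4, 5.0, 105.0))

def smoothness(f, xi, gamma=1.0, phi=1.0):
    """Discrete smoothness measure: gamma times the integral of (f')^2 plus phi times the
    integral of (f'')^2, approximated by first and second differences on the uniform grid.
    The weights gamma and phi are illustrative; in the text they carry units of time."""
    d1 = np.diff(f) / xi
    d2 = np.diff(f, 2) / xi**2
    return gamma * np.sum(d1**2) * xi + phi * np.sum(d2**2) * xi

print(smoothness(f, xi))
```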
briefly the minimization algorithm is structured as follows: } \text { , } \tilde{\mu}^{\left [ 0\right ] } \text { and set ( } f,\rho\text { ) } = \text{(}f^{\left [ 0\right ] } , \rho^{\left [ 0\right ] } \text { ) } \nonumber\\ & \text{with the seed ( } f^{\left [ 0\right ] } , \rho^{\left [ 0\right ] } \text { ) satisfying the inequality constraints ( \ref{ineq_constr } ) .let } k=0.\nonumber\\ \text{step 1}\text { : } & \text{make a second order approximation of } z\text { at ( } f^{\left [ k\right ] } , \rho^{\left [ k\right ] } \text { ) ( see appendix a).}\nonumber\\[0.03 in ] \text{step 2}\text { : } & \text{determine the newton step ( } \hat{f}^{\left [ k+1\right ] } , \hat{\rho}^{\left [ k+1\right ] } \text { ) using dynamic programming ( see appendix b).}\nonumber\\[0.03 in ] \text{step 3}\text { : } & \text{modify ( } \hat{f}^{\left [ k+1\right ] } , \hat{\rho}^{\left [ k+1\right ] } \text { ) to get a solution ( } f^{\left [ k+1\right ] } , \rho^{\left [ k+1\right ] } \text { ) that satisfies the inequality}\nonumber\\ & \text{constraints ( \ref{ineq_constr } ) .update log barriers ( } \mu^{\left [ k\right ] } \rightarrow0\text { and } \tilde{\mu}^{\left [ k\right ] } \rightarrow0\text { as } k\rightarrow\infty\text { ) and}\nonumber\\ & \text{calculate the values of } w\text { and of constraints ( \ref{eq_const } ) .check if a termination criterion}\nonumber\\ & \text{is satisfied , otherwise let } k = k+1\text { and go to step 1 ( see appendix c ) . } \label{algorithm}\ ] ] computing times involved in step 2 are summarized in subsection b.1 .the solution typically stabilizes in approximately 6 iterations as can be seen in fig.([graph2 ] ) . on a pentium 4 , 2.4 ghz computerthe algorithm coded in c++ takes around 1/4 sec . to compute a forward rate curve like any of the ones seen in fig.([graph1 ] ) .( [ objective ] ) ) vs. newton iterations ( starting from the seeds : }=0.04/year ] ) .the plots correspond to the forward rate curves of fig.([graph1 ] ) .the converge behavior is affected by the termination criterion dictated by eq.([criterion ] ) and by the initial values of the log barriers dictated by eq.([values ] ) .note the exponential convergence of the algorithm in the first steps .note also that on friday 06 the minimal solution for ad measure is , for all practical purposes , a straight line . , width=566 ] year , ) while the ones in plot ( b ) correspond to the ad measure ( , year ) . in both plotswe have used a tolerance spread of 1% ( ) ., width=566 ]in this section we present some examples of the behavior of algorithm ( [ algorithm ] ) and we investigate the predictive power of the resulting forward rate curves . in fig.([graph1 ] ) we present a series of forward rate curves calculated using df and ad smoothing measures . therewe can see that for both measures the resulting curves share some similar traits like the positions of most peaks and dips .clear differences between both sets of curves are found at their end - points and in their behavior in the presence of high spreads . in the set of curves obtained from the df measure we have vanishing first derivative at the end - points and curves that tend to constants for high spreads .for the ad measure we have vanishing second and third derivatives at end - points and curves for high enough spreads given by straight lines . from the financial point of view these features are , in principle , just different aesthetic possibilities . 
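Referring back to the step scheme above, the toy example below illustrates the barrier/Newton structure (steps 0-3) on a deliberately simplified problem: a quadratic first-difference smoothness term, a single log-price-style linear equality constraint, positivity enforced by a log barrier, and a fraction-to-boundary step. It is not the authors' dynamic-programming implementation; it uses dense linear algebra, a single constraint, and made-up sizes and tolerances, purely to show how the pieces fit together.

```python
import numpy as np

n = 60
xi = np.full(n, 1.0 / 12.0)                    # monthly grid (illustrative)
a = xi.copy()                                  # one "log-price" style linear constraint: a @ f = rho
rho = 0.05 * xi.sum()                          # target consistent with rates around 5%

D = np.diff(np.eye(n), axis=0)                 # first-difference operator
Q0 = 2.0 * D.T @ D                             # Hessian of the quadratic smoothness term ||D f||^2

f = np.full(n, 0.10)                           # step 0: strictly positive starting curve
mu = 1e-1                                      # initial log-barrier weight

for k in range(30):
    # Step 1: gradient and Hessian of ||D f||^2 - mu * sum(log f) plus the linear constraint.
    grad = Q0 @ f - mu / f
    H = Q0 + np.diag(mu / f**2)
    # Step 2: Newton/KKT step for (delta, lambda).
    KKT = np.block([[H, a[:, None]], [a[None, :], np.zeros((1, 1))]])
    rhs = np.concatenate([-grad, [rho - a @ f]])
    sol = np.linalg.solve(KKT, rhs)
    delta = sol[:n]
    # Step 3: fraction-to-boundary step keeps f > 0, then the barrier is shrunk.
    neg = delta < 0
    alpha = 1.0 if not neg.any() else min(1.0, 0.95 * np.min(-f[neg] / delta[neg]))
    f = f + alpha * delta
    mu *= 0.2
    if np.abs(a @ f - rho) < 1e-10 and np.linalg.norm(delta) < 1e-8:
        break

print(f.min() > 0, a @ f, rho)                 # positive curve reproducing the constraint
```

In the paper the analogous Newton system is banded and is solved by dynamic programming rather than by the dense factorization used here.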
in order to choose a particular measurethe guiding principles should be , in the first place , the fulfillment of the consistency constraints given by eqs.([eq_const]-[ineq_constr ] ) and after this is guaranteed the predictive performance .let us start now with the analysis of consistency .as can be found in if we do not consider the positivity constraint the local minima of objective ( [ objective ] ) are given by exponential splines with exponents or polynomial splines of order 2 or 4 when or are respectively zero .the main problem with these exponential or polynomial splines is that there is no warranty that they fulfill the positivity constraint .negative rates are not admissible in the absence of arbitrage opportunities and the risk of obtaining this unwanted feature is illustrated in fig.([graph3 ] ) . in this figurewe have concentrated on the swedish bond data on monday , july 09 , 2001 .there we have tested three spread patterns for both df and ad measures with and without the positivity constraint . from these plotsit is evident that the inclusion of spreads in the calculation of the forward rates is a necessary ingredient that can have a major impact in the resulting functional behavior .once we have an algorithm that insures the consistency of forward rates we can concentrate on the predictive accuracy of different measures .however , before starting the analysis of this issue let us make a brief digression to comment a point regarding measure ( [ objective ] ) . if we want to have both and different from zero and we want to compare the effects of each term it is important to realize that df and ad measures scale differently under changes of time units . in other words , and have different units . a practical way to define their units is to consider the objective as an adimensional quantity . by doing so and remembering that has units of inverse time , it is immediate to obtain that has units of time and units of time .the importance of keeping this in mind becomes apparent in results like the ones presented in figs.([graph4 ] ) and ( [ graph5 ] ) .figs.([graph4 ] ) and ( [ graph5 ] ) summarize our results regarding the predictive accuracy of the algorithm as a function there it is clear that the characteristic time span where df and ad compete is not the day or the century , but clearly the year . to construct these figureswe have calculated the forward rate curves for different values of when one bond is removed from the constraining dataset .the price of this missing bond is used afterwards as a benchmark to test the accuracy of the resulting curves .since we are interested in the statistical performance we have done such comparison for 335 consecutive trading days starting on wednesday , november 08 , 2000 and ending on thursday , march 07 , 2002 .we are also interested in studying the impact of spreads in the constraining dataset over the predictive accuracy . therefore we present our results for three spread patterns , namely constant spreads of 0% , 0.5% and 1% in the constraining dataset .fig.([graph4 ] ) concentrates on the predictive accuracy for the first 9 bonds of table ( [ table2 ] ) and fig.([graph5 ] ) presents the same analysis for the remaining 2 bonds of table ( [ table2 ] ) .these last 2 bonds are the ones with the larger maturities in the complete dataset . 
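The prediction-accuracy study just described boils down to simple bookkeeping of leave-one-out relative errors, as sketched below. The prices here are synthetic placeholders: in the actual procedure each "predicted" price comes from refitting the forward curve with that bond removed from the constraining dataset, and the averages are taken over the 335 trading days mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

n_days, n_bonds = 335, 11                              # trading days and bonds in the study
market = 100.0 + rng.normal(0.0, 2.0, size=(n_days, n_bonds))           # stand-in market prices
predicted = market * (1.0 + rng.normal(0.0, 2e-3, size=market.shape))   # stand-in leave-one-out predictions

# Relative error of the leave-one-out prediction for bond j on day d.
rel_err = (predicted - market) / market

mean_abs_err = np.mean(np.abs(rel_err), axis=0)   # per-bond average of |error|
mean_err = np.mean(rel_err, axis=0)               # per-bond plain average (keeps the sign)

for j, (ae, me) in enumerate(zip(mean_abs_err, mean_err)):
    print(f"bond {j}: mean |rel err| = {ae:.2e}, mean rel err = {me:+.2e}")

# A single cross-validation score, e.g. for comparing smoothness measures or spread levels,
# can then be the average over bonds:
print("overall score:", mean_abs_err.mean())
```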
in particular for the last one with the largest maturity we have to decide upon the methodology to extrapolate the forward curve outside the range of the constraining dataset .therefore in fig.([graph5 ] ) we present the prediction accuracy using constant extrapolation from the last maturity in the constraining dataset and -generated extrapolation that consists in utilizing the -optimal forward rate curve even outside the range of the constraining dataset . and [ table2 ] ) and allowing for a large 5 % in the rest ( almost unconstraining this subset ) . doing so we observe in ( b1 ) and ( b2 ) curves that are still negative in the region where the nominal cash flows of the 0-spread bonds take place ., width=604 ] .these prediction errors are obtained removing the indicated bonds from the constraining dataset .the statistical sample comprise the forward curves corresponding to 335 consecutive trading days starting on wednesday , november 08 , 2000 and ending on thursday , march 07 , 2002 . within this set of dayswe calculate the relative errors of the predicted prices given by where is the actual price of the bond given by the market and is the price predicted using the prices of all other bonds in the dataset .hence in plots ( a1 ) , ( a2 ) and ( a3 ) we show the average of the absolute value of these errors and the plots ( b1 ) , ( b2 ) and ( b3 ) just their average .consecutive columns show averages obtained using spreads of , and in the constraining dataset .the set of predicted bonds shown in this figure is given by the first 9 bonds of table [ table2 ] .the remaining bonds of this table ( the last two bonds with the larger maturities ) are analyzed in fig.([graph5 ] ) .note how prediction errors tend to decrease when spreads are considered albeit not significantly ( compare with fig.([graph5 ] ) ) .note also that in the absence of spreads the df measure ( ) systematically exhibits a better performance than the ad one ( )).,width=566 ] ) but now the plots correspond to the last two bonds of table [ table2 ] that are the ones with larger maturities within our dataset ( we have also included the bond with the shortest maturity to facilitate the comparison with fig.([graph4 ] ) ) . for the bond with the largest maturitythe forward curve has to be extrapolated to reach its last cash flows ( see fig.([graph3 ] ) ) .we consider two extrapolation possibilities , one where we continue the forward rate curve from the last cash flow as a constant and the other where the curve is dictated by the minimization of functional even beyond the last cash flow in the constraining dataset .note that constant extrapolation gives better results than w - generated extrapolation in the region close to the ad measure ( ) . 
again like in fig.([graph4 ] ) we observe that in the absence of spreads the df measure ( ) systematically exhibits a better performance than the ad one .the most striking difference with fig.([graph4 ] ) is that here we observe that for these two long maturing bonds the introduction of spreads drastically reduce the prediction error all along the family of measures.,width=566 ]in this article we have presented a non - linear dynamic programming algorithm designed for the calculation of positive definite forward rate curves using data with or without spreads .we have included multiple details of the algorithm aiming at practitioners not familiar with the techniques of dynamic programming .we have illustrated the results of this algorithms using the swedish bond data for a one - parametric family of smoothness measures .the results and conclusions are the following : * the proposed algorithm calculates forward rate curves in real time and admits , without time or complexity penalizations , the use of any non - linear _ local _ objective function . since it also handles non - linear constraints it is possible to include within the constraining dataset any derivative products with prices bearing some dependence on forward rates .* this is the first algorithm proposed in the literature that implements the positivity constraint in the maximum smoothness framework . to the knowledge of the authorsthe only other work that implements such constraint outside this framework is the one of wets et al . .the proposal in has the advantage of using simple linear programming but do not consider the presence of spreads minimizing instead the sum over the _ modulus _ of the difference between calculated prices and market prices .* for the objective functions and constraining datasets like the ones we have used or more generally for the ones studied in the algorithm proposed in that reference offers better computing times at the expense of ignoring the positivity constraint .essentially the complexity in is and in ours is where is the number of time steps and the number of constraints .for that reason , when this class of objective functions and constraining datasets are used , a well coded algorithm might try first the proposal in ( improving its treatment of the spreads using e.g. log - barriers ) and later , only if the result is not positive definite , use our approach .* since the optimization problem we are solving is non - convex we can not discard the presence of several local minima ( this is independent of the presence of the positivity constraint ) .however , we have tested our algorithm starting from different seeds and in all cases we have arrived at the same minima .these tests included hundreds of searches with initial log - prices given by } = \rho_{j}^{b}+x\left ( \rho_{j}^{a}-\rho_{j}^{b}\right ) ] a flat random variable ) and initial forward rates given by several constant and oscillating functions . with this commentwe only want to convey our practical experience and by no means we intend to say that we have exhaustively explored the presence or absence of local minima . *it is clear that one way to avoid negative rates is just by increasing spreads by hand .the advantage of using real spreads at a given moment is that one can be sure of being consistent with market prices .if one observes in fig.([graph3 ] ) the large forward rate variations taking place for different spread patterns no doubts should remain about the relevance of a careful treatment of this issue . 
* from fig.([graph5 ] ) we conclude that the inclusion of spreads can remarkably improve the accuracy of resulting forward curves in the prediction of market prices of long maturing bonds . in it was pointed out that the inclusion of spreads notably improved the smoothness of the forward curve . to the knowledge of the authorsthis is the first time it is shown that their presence also improves the prediction accuracy .for that reason we believe that spreads should be considered even when the market data does not provide such information . in that casethe approach should consist in using a cross validation technique to asses the optimal spread minimizing an error criterion based in the prediction of market prices ( for cross validation methods see for example ) .one possibility is using the well know leave - one - out cross - validation to select the optimal spread much in the spirit suggested by figs.([graph4 ] ) and ( [ graph5 ] ) . given a constraining set of products ,the method actually consists in obtaining forward rate curves for a given spread , each curve leaving out one of the constraining products ( bonds in our case ) but using only the omitted products to compute an error criterion like thus we obtain a quantitative criterion to select an optimal level of spread .for example for our dataset and our family of models ( measures ) it can be conjectured from figs.([graph4 ] ) and ( [ graph5 ] ) that this optimal level of spread is typically around 0.5% . *figs.([graph4 ] ) and ( [ graph5 ] ) strongly suggest that for low spread patterns df measure is more accurate than ad one . for bigger spreads resultsdo not clearly favor any particular measure .j. m. acknowledges the financial support from _ tekniska hgskolan , inst .fr fysik och mtteknik _ ,linkping university , thanks j. shaw ( barclays ) for calling his attention to ref . , f. delbaen for ref . and j. m. eroles ( hsbc ) for proof reading the manuscript .j. m. would also like to thank dr . per olov lindberg for the hospitality during his stay at the division of optimization ; linkping university .the authors would like to thank the referees for their helpful comments on the manuscript .since the objective function in eq.([total_obj ] ) is a non - quadratic function of , we will use an iterative quadratic approximation ( newton steps ) to find its minima .we start with feasible seeds } ] fulfilling inequality constraints ( [ ineq_constr ] ) and we set up initial log - barriers coefficients } > 0 ] for newton iteration number we define } & : = f_{r}^{\left [ s-1\right ] } + \delta _ { r}^{\left [ s\right ] } , \nonumber\\ \hat{\rho}_{j}^{\left [ s\right ] } & : = \rho_{j}^{\left [ s-1\right ] } + \sigma_{j}^{\left [ s\right ] } , \label{dynamic_sol}\ ] ] with the hat over and indicate that at each step , } ] are the minima of the quadratic approximation and may not fulfill the inequality constraints ( [ ineq_constr ] ) . to assure constraints ( [ ineq_constr ] )are fulfilled a final redefinition } \rightarrow f^{\left [ s\right ] } , ] is necessary after each newton step .this redefinition is explained in appendix c. 
expanding up to second order in } ] we write } & : = \left .z\left ( \hat{f}^{\left [ s\right ] } , \hat{\rho}^{\left [ s\right ] } , \lambda^{\left [ s\right ] } \right ) \right\vert _ { o\left ( 2\right ) } \nonumber\\ & = \frac{1}{2}\delta^{\left [ s\right ] t}q^{\left [ s\right ] } \delta^{\left [ s\right ] } + \delta^{\left [ s\right ] t}b^{\left [ s\right ] } \lambda^{\left [ s\right ] } + \delta^{\left [ s\right ] t}c^{\left [ s\right ] } + \lambda^{\left [ s\right ] t}a^{\left [ s\right ] } \nonumber\\ & + \sigma^{\left [ s\right ] t}\lambda^{\left [ s\right ] } + \frac{1}{2}\sigma^{\left [ s\right ] t}m^{\left [ s\right ] } \sigma^{\left [ s\right ] } + \sigma^{\left [ s\right ] t}d^{\left [ s\right ] } + b^{\left [ s\right ] } , \label{quadratic}\ ] ] where } ] , } ] .we will use square brackets around newton step indices and parenthesis around dynamic programming ones .let us now work - out the matrices involved in the quadratic approximation . from eqs.([objective ] ) and ( [ quadratic ] ) we immediately obtain t}q^{\left [ s\right ] } \delta^{\left [ s\right ] } + \delta^{\left [ s\right ] t}c^{\left [ s\right ] } \\ & = \gamma\sum_{r=1}^{n-1}\left ( \frac{f_{r+1}^{\left [ s-1\right ] } -f_{r}^{\left [ s-1\right ] } } { \xi_{r}}\right ) \left ( \frac{\delta _ { r+1}^{\left [ s\right ] } -\delta_{r}^{\left [ s\right ] } } { \xi_{r}}\right ) \xi_{r}+\frac{\gamma}{2}\sum_{r=1}^{n-1}\left ( \frac{\delta_{r+1}^{\left [ s\right ] } -\delta_{r}^{\left [ s\right ] } } { \xi_{r}}\right ) ^{2}\xi_{r}\\ & + 4\varphi\sum_{r=2}^{n-1}\left ( \frac{1}{\left ( \xi_{r-1}+\xi_{r}\right ) } \left ( \frac{1}{\xi_{r}}f_{r+1}^{\left [ s-1\right ] } + \frac{1}{\xi_{r-1}}f_{r-1}^{\left [ s-1\right ] } \right ) -\frac{1}{\xi_{r}\xi_{r-1}}f_{r}^{\left [ s-1\right ] } \right ) \\ & \times\left ( \frac{1}{\left ( \xi_{r-1}+\xi_{r}\right ) } \left ( \frac { 1}{\xi_{r}}\delta_{r+1}^{\left [ s\right ] } + \frac{1}{\xi_{r-1}}\delta _ { r-1}^{\left [ s\right ] } \right ) -\frac{1}{\xi_{r}\xi_{r-1}}\delta _ { r}^{\left [ s\right ] } \right ) \xi_{r}\\ & + 2\varphi\sum_{r=2}^{n-1}\left ( \frac{1}{\left ( \xi_{r-1}+\xi_{r}\right ) } \left ( \frac{1}{\xi_{r}}\delta_{r+1}^{\left [ s\right ] } + \frac{1}{\xi _ { r-1}}\delta_{r-1}^{\left [ s\right ] } \right ) -\frac{1}{\xi_{r}\xi_{r-1}}\delta_{r}^{\left [ s\right ] } \right ) ^{2}\xi_{r},\end{aligned}\ ] ] and defining } & : = \sum_{j=1}^{m}\lambda_{j}^{\left [ s\right ] } \left [ \hat{\rho}_{j}^{\left [ s\right ] } -\ln\left ( v_{j}^{\left [ s\right ] } \right ) + \sum_{r=1}^{r_{n_{j}}^{\left ( j\right ) } -1}\hat{f}_{r}^{\left [ s\right ] } \xi_{r}\right ] , \label{cosntr}\\ y_{\mu}^{\left [ s\right ] } & : = -\mu^{\lbrack s-1]}\sum_{r=1}^{n}\ln\left ( \hat{f}_{r}^{\left [ s\right ] } \right ) , \label{barrier1}\\ y_{\tilde{\mu}}^{\left [ s\right ] } & : = -\tilde{\mu}^{\left [ s-1\right ] } \sum_{j=1}^{m}\left ( \ln\left ( \rho_{j}^{\left [ s\right ] } -\rho_{j}^{b}\right ) + \ln\left ( \rho_{j}^{a}-\rho_{j}^{\left [ s\right ] } \right ) \right ) , \label{barrier2}\ ] ] we have } & = \sum_{j=1}^{m}\lambda_{j}^{\left [ s\right ] } \left [ \rho_{j}^{\left [ s-1\right ] } + \sigma_{j}^{\left [ s\right ] } -\ln\left ( v_{j}^{\left [ s-1\right ] } \right ) + \sum _ { r=1}^{r_{n_{j}}^{\left ( j\right ) } -1}\left ( f_{r}^{\left [ s-1\right ] } + \delta_{r}^{\left [ s\right ] } \right ) \xi_{r}\right . 
\\ & \left .-\frac{1}{v_{j}^{\left [ s-1\right ] } } \sum\limits_{i=1}^{n_{j}-1}\sum_{r = r_{i}^{\left ( j\right ) } } ^{r_{n_{j}}^{\left ( j\right ) } -1}\alpha_{ij}\exp\left ( \sum_{z = r_{i}^{\left ( j\right ) } } ^{r_{n_{j}}^{\left ( j\right ) } -1}f_{z}^{\left [ s-1\right ] } \xi_{z}\right ) \delta_{r}^{\left [ s\right ] } \xi_{r}+o\left ( \delta_{r}^{\left [ s\right ] } { } ^{2}\right ) \right ] , \\ y_{\mu}^{\left [ s\right ] } & = -\mu^{\left [ s-1\right ] } \sum_{r=1}^{n}\left ( \frac{\delta_{r}^{\left [ s\right ] } } { f_{r}^{\left [ s-1\right ] } } -\frac{1}{2}\left ( \frac{\delta_{r}^{\left [ s\right ] } } { f_{r}^{\left [ s-1\right ] } } \right ) ^{2}+o\left ( \delta_{r}^{\left [ s\right ] } { } ^{3}\right ) \right ) , \\ y_{\tilde{\mu}}^{\left [ s\right ] } & = -\tilde{\mu}^{\left [ s-1\right ] } \sum_{j=1}^{m}\left ( \ln\left ( \rho_{j}^{\left [ s-1\right ] } -\rho_{j}^{b}\right ) + \ln\left ( \rho_{j}^{a}-\rho_{j}^{\left [ s-1\right ] } \right ) \right ) \\ & + \tilde{\mu}^{\left [ s-1\right ] } \sum_{j=1}^{m}\left ( \frac{1}{\rho _ { j}^{a}-\rho_{j}^{\left [ s-1\right ] } } -\frac{1}{\rho_{j}^{\left [ s-1\right ] } -\rho_{j}^{b}}\right ) \sigma_{j}^{\left [ s\right ] } \\ & + \frac{\tilde{\mu}^{\left [ s-1\right ] } } { 2}\sum_{j=1}^{m}\left [ \frac { 1}{\left ( \rho_{j}^{\left [ s-1\right ] } -\rho_{j}^{a}\right ) ^{2}}+\frac{1}{\left ( \rho_{j}^{\left [ s-1\right ] } -\rho_{j}^{b}\right ) ^{2}}\right ] \sigma_{j}^{\left [ s\right ] 2}+o\left ( \sigma_{j}^{\left [ s\right ] 3}\right ) .\end{aligned}\ ] ] thus defining {cc}1 & x\leq r\leq y\\ 0 & \mathrm{otherwise}\end{array } \right . , \qquad\delta_{i , j}:=\left\ { \begin{array } [ c]{cc}1 & i = j\\ 0 & \mathrm{otherwise}\end{array } \right . , \ ] ] from above expansions and eq.([quadratic ] ) we immediately obtain } & = \xi_{r}\chi\left ( r,1,r_{n_{j}}^{\left ( j\right ) }-1\right ) -\frac{1}{v_{j}^{\left [ s-1\right ] } } \sum \limits_{i=1}^{n_{j}-1}\alpha_{ij}\exp\left ( \sum_{z = r_{i}^{\left ( j\right ) } } ^{r_{n_{j}}^{\left ( j\right ) } -1}f_{z}^{\left [ s-1\right ] } \xi _ { z}\right ) \xi_{r}\chi\left ( r , r_{i}^{\left ( j\right ) } , r_{n_{j}}^{\left ( j\right ) } -1\right ) ,\\ a_{j}^{\left [ s\right ] } & = \rho_{j}^{\left [ s-1\right ] } -\ln\left ( v_{j}^{\left [ s-1\right ] } \right ) + \sum_{r=1}^{r_{n_{j}}^{\left ( j\right ) } -1}f_{r}^{\left [ s-1\right ] } \xi_{r},\\ q_{r , x}^{\left [ s\right ] } & = 4\varphi\left [ \left ( \frac{\left ( 1-\delta_{r,1}\right ) \left ( 1-\delta_{r,2}\right ) \xi_{r-1}}{\left ( \xi_{r-2}+\xi_{r-1}\right ) ^{2}\xi_{r-1}^{2}}+\frac{\left ( 1-\delta _ { r , n}\right ) \left ( 1-\delta_{r,1}\right ) \xi_{r}}{\xi_{r}^{2}\xi _ { r-1}^{2}}+\frac{\left ( 1-\delta_{r , n-1}\right ) \left ( 1-\delta _ { r , n}\right ) \xi_{r+1}}{\left ( \xi_{r}+\xi_{r+1}\right ) ^{2}\xi_{r}^{2}}\right ) \delta_{r , x}\right . \\ & + \frac{\xi_{r+1}\delta_{r+2,x}}{\left ( \xi_{r}+\xi_{r+1}\right ) ^{2}\xi_{r+1}\xi_{r}}+\frac{\xi_{x+1}\delta_{r-2,x}}{\left ( \xi_{x}+\xi _ { x+1}\right ) ^{2}\xi_{x+1}\xi_{x}}-\frac{\left ( 1-\delta_{r,1}\right ) \xi_{r}\delta_{r+1,x}}{\left ( \xi_{r-1}+\xi_{r}\right ) \xi_{r-1}\xi_{r}^{2}}\\ & \left . 
-\frac{\left ( 1-\delta_{x,1}\right ) \xi_{x}\delta_{r-1,x}}{\left ( \xi_{x-1}+\xi_{x}\right ) \xi_{x-1}\xi_{x}^{2}}-\frac{\left ( 1-\delta_{r , n}\right ) \xi_{r}\delta_{r-1,x}}{\left ( \xi_{r-1}+\xi _ { r}\right ) \xi_{r}\xi_{r-1}^{2}}-\frac{\left ( 1-\delta_{x , n}\right ) \xi_{x}\delta_{r+1,x}}{\left ( \xi_{x-1}+\xi_{x}\right ) \xi_{x}\xi_{x-1}^{2}}\right ] \\ & + \gamma\left [ \left ( 1-\delta_{r , n}\right ) \frac{\xi_{r}}{\xi_{r}^{2}}\delta_{r , x}+\left ( 1-\delta_{r,1}\right ) \frac{\xi_{r-1}}{\xi_{r-1}^{2}}\delta_{r , x}-\frac{\xi_{r}}{\xi_{r}^{2}}\delta_{r+1,x}-\frac{\xi_{x}}{\xi _ { x}^{2}}\delta_{r-1,x}\right ] + \frac{\mu^{\left [ s-1\right ] } } { f_{r}^{\left [ s-1\right ] 2}}\delta_{r , x},\\ c_{r}^{\left [ s\right ] } & = -\frac{\mu^{\left [ s-1\right ] } } { f_{r}^{\left [ s-1\right ] } } + \gamma\frac{f_{r}^{\left [ s-1\right ] } -f_{r-1}^{\left [ s-1\right ] } } { \xi_{r-1}}\frac{\left ( 1-\delta _ { r,1}\right ) } { \xi_{r-1}}\xi_{r-1}-\gamma\frac{f_{r+1}^{\left [ s-1\right ] } -f_{r}^{\left [ s-1\right ] } } { \xi_{r}}\frac{\left ( 1-\delta_{r , n}\right ) } { \xi_{r}}\xi_{r}\\ & + 4\varphi\left ( \frac{f_{r}^{\left [ s-1\right ] } -f_{r-1}^{\left [ s-1\right ] } } { \xi_{r-1}}+\frac{f_{r-2}^{\left [ s-1\right ] } -f_{r-1}^{\left [ s-1\right ] } } { \xi_{r-2}}\right ) \frac{\left ( 1-\delta _ { r,1}\right ) \left ( 1-\delta_{r,2}\right ) } { \left ( \xi_{r-2}+\xi _ { r-1}\right ) ^{2}\xi_{r-1}}\xi_{r-1}\\ & + 4\varphi\left ( \frac{f_{r+2}^{\left [ s-1\right ] } -f_{r+1}^{\left [ s-1\right ] } } { \xi_{r+1}}+\frac{f_{r}^{\left [ s-1\right ] } -f_{r+1}^{\left [ s-1\right ] } } { \xi_{r}}\right ) \frac{\left ( 1-\delta_{r , n}\right ) \left ( 1-\delta_{r , n-1}\right ) } { \left ( \xi_{r}+\xi_{r+1}\right ) ^{2}\xi_{r}}\xi_{r+1}\\ & -4\varphi\left ( \frac{f_{r+1}^{\left [ s-1\right ] } -f_{r}^{\left [ s-1\right ] } } { \xi_{r}}+\frac{f_{r-1}^{\left [ s-1\right ] } -f_{r}^{\left [ s-1\right ] } } { \xi_{r-1}}\right ) \frac{\left ( 1-\delta_{r,1}\right ) \left ( 1-\delta_{r , n}\right ) } { \left ( \xi_{r-1}+\xi_{r}\right ) \xi_{r}\xi_{r-1}}\xi_{r},\\ m_{j , k}^{\left [ s\right ] } & = \tilde{\mu}^{\left [ s-1\right ] } \left [ \frac{1}{\left ( \rho_{j}^{a}-\rho_{j}^{\left [ s-1\right ] } \right ) ^{2}}+\frac{1}{\left ( \rho_{j}^{b}-\rho_{j}^{\left [ s-1\right ] } \right ) ^{2}}\right ] \delta_{j , k},\\ d_{j}^{\left [ s\right ] } & = \tilde{\mu}^{\left [ s-1\right ] } \left ( \frac{1}{\rho_{j}^{a}-\rho_{j}^{\left [ s-1\right ] } } + \frac{1}{\rho_{j}^{b}-\rho_{j}^{\left [ s-1\right ] } } \right ) .\end{aligned}\ ] ]given positive definite symmetric matrices and and the -vector and -vector , we want to minimize the objective function subject to the set of constraints thus defining a new objective function for any we define{cc}1 & x\neq0\\ 0 & x=0 \end{array } \right . , \ ] ] where e.g. , for diagonal , tridiagonal matrices respectively .when and then can be solved efficiently with dynamic programming . 
to set up the notation and for those readers not familiar with dynamic programming let us here briefly explain the basics of this well known method .we start defining where satisfies for we use the inductive hypothesis where and .the final step of this backwards process consists in obtaining and satisfying from eqs.([extrema ] ) and ( [ hypothesis ] ) we obtain and plugging eq.([first ] ) into eq.([fundamental ] ) we obtain hence obtaining where{cc}1 & x\geq0,\\ 0 & x<0 .\end{array } \right.\ ] ] now from eq.([lagrange ] ) we obtain or and using eq.([g ] ) we obtain finally from and eq.([sig ] ) we obtain and then using eqs.([first ] ) and ( [ q]-[c ] ) we obtain moving forward in the index .the time necessary to compute all is proportional to to calculate the inverse of , that is a symmetric matrix , we require a computing time proportional to all and require , respectively , computing times proportional to finally the calculation of requires a computing time proportional to in the forward rate calculation typically we have ( in eq.([objective ] ) we have ) and therefore the maximum delay would be given by newton step obtained from eq.([dynamic_sol ] ) may not fulfill the inequality constraints ( [ ineq_constr ] ) . to satisfy such constraints we search for a } ] such that } + \alpha^{\lbrack s]}\delta_{r}^{\left [ s\right ] } \geq0,\qquad\rho_{j}^{b}\leq\rho_{j}^{\left [ s-1\right ] } + \alpha^{\lbrack s]}\sigma_{j}^{\left [ s\right ] } \leq\rho_{j}^{a}.\label{constr_alpha}\ ] ] in order to do this we first determine the maximum } ] satisfying eq.([constr_alpha ] ) .that is , given the sets of points } \leq0\right\ } , \\\tilde{a}_{\leq } & : = \left\ { j,\text { } \hat{\rho}_{j}^{\left [ s\right ] } \leq\rho_{j}^{b}\right\ } , \\ \tilde{a}_{\geq} & : = \left\ { j,\text { } \hat{\rho}_{j}^{\left [ s\right ] } \geq\rho_{j}^{a}\right\ } , \end{aligned}\ ] ] we have}=\min\left ( \min\limits_{r\in a}\left ( \frac { -f_{r}^{\left [ s-1\right ] } } { \delta_{r}^{\left [ s\right ] } } \right ) , \min\limits_{j\in\tilde{a}_{\leq}}\left ( \frac{\rho_{j}^{b}-\rho _ { j}^{\left [ s-1\right ] } } { \sigma_{j}^{\left [ s\right ] } } \right ) , \min\limits_{j\in\tilde{a}_{\geq}}\left ( \frac{\rho_{j}^{a}-\rho _ { j}^{\left [ s-1\right ] } } { \sigma_{j}^{\left [ s\right ] } } \right ) \right ) , \ ] ] and then we take}:=\beta\alpha_{\max}^{[s]},\ ] ] with ( in our implementation we have taken that is a standard election in the optimization literature ) .once we have } ] is a monotonically decreasing function satisfying . in our implementation we have taken of the form with , .the value of controls how fast barriers are reduced . 
in our implementationwe have found that adequate values for range .the value of controls the non - linearity of .we have found that the simple linear response provides good performance albeit convergence time is not significantly affected for in the range in this way we iterate the algorithm times until a given termination criterion is met .defining } & : = \ln\left ( w\left ( f^{[s]}\right ) \right ) -\ln\left ( w\left ( f^{[s-1]}\right ) \right ) , \\\epsilon^{\left [ s\right ] } & : = \max\limits_{j}\left ( \left\vert \rho _ { j}^{\left [ s\right ] } -\ln\left ( v_{j}^{\left [ s\right ] } \right ) + \sum_{r=1}^{r_{n_{j}}^{\left ( j\right ) } -1}f_{r}^{\left [ s\right ] } \xi_{r}\right\vert \right ) , \end{aligned}\ ] ] we have chosen the following termination criterion}<w_{zero}~~\mathrm{or}~~\left ( \left\vert \delta_{\ln w}^{\left [ s\right ] } \right\vert < \delta_{\ln w}^{\max}~~\mathrm{and}~~\left\vert \delta_{\ln w}^{\left [ s-1\right ] } \right\vert < \delta_{\ln w}^{\max}\right ) \right ] \right .\nonumber\\ & \left .\mathrm{and}~~n_{it}>n_{\min}~~\mathrm{and}~~\mu^{\left [ s\right ] } < \mu_{\max}~~\mathrm{and}~~\tilde{\mu}^{\left [ s\right ] } < \tilde{\mu}_{\max}~~\mathrm{and}~~\epsilon^{\left [ s\right ] } < \epsilon_{\max}\right\ } . \label{criterion}\ ] ] in our implementation we have taken } & = 10^{-1},\quad\mu_{\min}=10^{-10},\quad\mu_{\max}=10^{-6},\nonumber\\ \tilde{\mu}^{[0 ] } & = 10^{+1},\quad\tilde{\mu}_{\min}=10^{-10},\quad \tilde{\mu}_{\max}=10^{-6},\nonumber\\ \epsilon_{\max } & = 10^{-8},\quad\delta_{\ln w}^{\max}=10^{-2},\quad w_{zero}=10^{-9},\nonumber\\ n_{\min } & = 5,\quad n_{\max}=60 . \label{values}\ ] ] obviously there is considerable latitude to change the heuristic values assigned to the above parameters .let us finish this appendix making some comments regarding their robustness . is there to guarantee a minimum number of iterations so as to have } ] well defined and to avoid premature termination in the improbable case where the other criteria incorrectly suggest convergence . for this purposeis enough to take . serves as a maximum limit to secure termination even if convergence is not achieved and therefore an alarm should be provided whenever . from our experiencewe observe that is more than enough to take sets our precision to consider a given forward rate curve as a straight line .the order of magnitude of should be taken much lower than the typical order of magnitude of the observed optimal the value of the optimal depends not only on the constraining data but also on and we have adopted the practise of spanning the range year taking year , year and the range year taking year , year . with this conventionwe have found reasonable to take for the full range sets the maximum variation of between newton steps that is accepted before termination .note that in ( [ criterion ] ) to have ( } \right\vert < \delta_{\ln w}^{\max}~~\mathrm{and}~~\left\vert \delta_{\ln w}^{\left [ s-1\right ] } \right\vert < \delta_{\ln w}^{\max} ] and the barrier coefficients as indicators of convergence . on the contrary excessively reducing can generate unnecessary iterations .we have tested that for values satisfying we do not have any drastic increase in convergence time . 
controls the error in the constraints and is the most important parameter in ( [ criterion ] ) .a too large value of reduces the accuracy of the result and a too small value can give rise to unnecessary iterations .we have observed acceptable results for in the range and control the maximum allowed values for the logarithmic barriers implementing the positivity and spread constraints respectively .large values for these parameters can make log barriers to have a residual influence in the feasible region. acceptable values of these maximum weights range in and as explained above keeping parameters and positive guarantees that matrices and are positive definite which in turn is sufficient to guarantee the existence of an optimal solution in each iteration .hence and should be chosen as small as possible without interfering with the numerical stability of the algorithm . using double precision in our programwe have found the values given in ( [ criterion ] ) as a good compromise .finally } ] are the initial values for the log barriers coefficients .taking very large values for } ] increases the convergence time because we need more time to reduce the barriers .taking too small values for } ] also increases the convergence time because time is wasted exploring unfeasible solutions .moreover , we have observed that convergence and stability is improved if the contributions to the objective of the two barrier terms are kept balanced .this is achieved setting }\simeq\frac{n}{2m}\mu^{\lbrack0]} ] for } ] ( see eq.([update ] ) ) . keeping }\simeq\frac { n}{2m}\mu^{\lbrack0]} ] in the range }\lesssim10^{+1}.$ ]in this work we have used the following data tables and conventions .[ c]|l||l|l|l|l|l|l|l|l|l|l|l| ( so ) & 1033 & 1042 & 1035 & 1044 & 1038 & 1037 & 1040 & 1043 & 1034 & 1045 & 1041 + & 4.86 & 4.92 & 5.06 & 5.15 & 5.26 & 5.27 & 5.355 & 5.4 & 5.395 & 5.46 & 5.655 + & 4.905 & 4.965 & 5.11 & 5.2 & 5.05 & 5.325 & 5.4 & 5.455 & 5.23 & 5.51 & 5.7 + & 4.885 & 4.945 & 5.095 & 5.185 & 4.93 & 5.295 & 5.395 & 5.435 & 5.24 & 5.495 & 5.68 + & 4.865 & 4.92 & 5.06 & 5.15 & 4.93 & 5.275 & 5.355 & 5.41 & 5.415 & 5.465 & 5.65 + & 4.835 & 4.885 & 5.025 & 5.125 & 4.93 & 5.25 & 5.355 & 5.405 & 5.405 & 5.465 & 5.66 + & 4.84 & 4.89 & 5.045 & 5.145 & 4.93 & 5.27 & 5.38 & 5.43 & 5.43 & 5.495 & 5.69 + & 4.825 & 4.87 & 5.035 & 5.13 & 4.93 & 5.25 & 5.36 & 5.415 & 5.41 & 5.475 & 5.66 + & 4.805 & 4.85 & 5.015 & 5.11 & 4.93 & 5.23 & 5.34 & 5.395 & 5.395 & 5.455 & 5.64 + & 4.8 & 4.86 & 5.005 & 5.1 & 4.93 & 5.22 & 5.335 & 5.38 & 5.395 & 5.445 & 5.64 + & 4.77 & 4.83 & 4.97 & 5.06 & 4.93 & 5.165 & 5.27 & 5.34 & 5.34 & 5.39 & 5.585 + the price of the bond is calculated using the formula where is the nominal amount ( sek 40 millions for all bonds in table [ table1 ] ) , is the coupon rate ( given in table [ table2 ] ) , is the total number of remaining coupons ( each paid at time ) , is the quoted rate given in table [ table1 ] and is a time difference between and the settlement day .this time difference is calculated according to the isma 30e/360 convention defined as follows : given two dates and , their isma 30e/360 time difference is given by [ c]|c||c|c| & & + & 05/05/2003 & 10.25 + & 15/01/2004 & 5 + & 09/02/2005 & 6 + & 20/04/2006 & 3.5 + & 25/10/2006 & 6.5 + & 15/08/2007 & 8 + & 05/05/2008 & 6.5 + & 28/01/2009 & 5 + & 20/04/2009 & 9 + & 15/03/2011 & 5.25 + & 05/05/2014 & 6.75 + kian guan lim , qin xiao , and jimmy ang , _ estimating forward rate curve in pricing interest rate derivatives _ , derivatives use , trading & 
regulation , an international journal of the futures and options association uk , vol .6 no.4 , pp .299 - 305 , 2001 .kwon , oh kang , _ a general framework for the construction and the smoothing of forward rate curves_. qfrg , university of technology , sydney .http://www.business.uts.edu.au/finance/qfr/cfrg_papers.html , march 2002 .
In this article we present a non-linear dynamic programming algorithm for the computation of forward rates within the maximum smoothness framework. The algorithm implements the forward rate positivity constraint for a one-parametric family of smoothness measures, and it handles price spreads in the constraining dataset. We investigate the outcome of the algorithm using the Swedish bond market, showing examples where the absence of the positivity constraint leads to negative interest rates. Furthermore we investigate the predictive accuracy of the algorithm as we move along the family of smoothness measures. Among other things we observe that the inclusion of spreads not only improves the smoothness of forward curves but also significantly reduces the predictive error.
a very elegant theory linking attenuation and dispersion is presented for viscoelastic media with positive relaxation spectrum .all the attenuation and dispersion functions compatible with the theory are represented by simple integral representations . the attenuation function and the dispersion function are both expressed as transforms of a positive measure ( the dispersion - attenuation measure ) .the measure is arbitrary except for a very mild growth condition .these expressions can be considered as a dispersion relation in parametric form .in contrast the acoustic kramers - kronig dispersion relations are non - local .they express the dispersion function in terms of the attenuation function or conversely .this presupposes that one of these functions ( usually the dispersion function ) is very accurately known and consistent with the basic assumptions of the theory . on the other hand , substituting any positive measure in the parametric dispersion relation yields an admissible ( compatible with the theory ) dispersion and attenuation .the origin of the kramers - kronig dispersion relations is unclear . in electromagnetic theorythey follow from the causality of the time - domain kernel representing the dielectric constant in the dispersive case . in acoustics they follow from an ad hoc assumption about the analytic properties of the wave number .it is namely assumed that the wave number is the fourier transform of a causal function or distribution .the physical meaning of the causal function or distribution is unclear hence the justification of the acoustic kramers - kronig dispersion relation is missing .various inequalities imposed on the complex continuation of the wave number can not be expressed in terms of a constitutive assumption .hence the acoustic kramers - kronig dispersion relation are an ad hoc addition to the constitutive equation , often incompatible with it . it will be also shown that in media with non - negative relaxation spectrum the frequency dependence of the attenuation function in the high frequency range is sublinear . in the case of power law attenuation the attenuation and dispersion are proportional to a power of frequency . numerous experiments in acoustics indicate that the power - attenuation law accurately represents the frequency dependence of dispersion and attenuation over several decades of frequency .the theory based on positive relaxation spectrum implies that .this seems to contradict the experimental ultrasound investigations of numerous materials which point to higher values of the exponent in the power law attenuation .for example in ultrasound investigations of soft tissues the exponent varies between 1 and 1.5 , while in some viscoelastic fluids such as castor oil it lies between 1.5 and 2 .we shall refer to this case as superlinear frequency dependence .typical values of the power - law exponent in medical applications using ultrasound transducers are in bovine liver for 1100 mhz , in human myocardium and other bio - tissues . in castor oil at ca 250 mhz .values in the range 12 are observed at lower frequencies in aromatic polyurethanes .nearly linear frequency dependence of attenuation is well documented in seismology .approximately linear frequency dependence of attenuation has been observed in geological materials in the range 140 hz to 2.2 mhz .several papers have been devoted to a theoretical underpinning of the superlinear dispersion - attenuation models . 
in order to resolve some problems chen andholm suggested to add a fractional laplacian of order of the velocity field to the usual laplacian of the displacement field in the equations of motion .their paper still leads to an unbounded sound speed for and adds a new problem : the equation of motion does not have the form of a viscoelastic equation of motion .sublinear power law attenuation ( ) has also been reported in experimental investigations .non - power law attenuation laws are usually derived from the constitutive laws .investigations of creep and relaxation in viscoelastic materials always support the assumption of positive relaxation spectrum ( e.g. for creep in metals , for the upper mantle with ) and therefore models derived from constitutive relations exhibit sublinear attenuation and dispersion at high frequencies . in particular thisapplies to the cole - cole , havriliak - negami and cole - davidson and kohlrausch - williams - watts relaxation laws commonly applied in phenomenological rock mechanics , polymer rheology , bio - tissue mechanics ( e.g. for bone collagen ) as well as for ionic glasses .another abnormal feature of wave propagation in media with superlinear power laws ( and more generally in media with superlinear asymptotic frequency dependence ) is appearance of precursors .the precursors extend to infinity and thus the speed of propagation of disturbances is infinite .finite speed of wave propagation requires that in the high - frequency range the exponent of the power law does not exceed 1 .it is a very challenging problem how to explain the incompatibility between the theory and experiment in the superlinear case .it seems likely that the attenuation observed at ultrasound frequencies significantly differs from the asymptotic behavior of attenuation at the frequency tending to infinity .one might surmise that the frequency range of ultrasound measurements is still far below the asymptotic high frequency range in which different mechanisms are at play .this suggests studying models of attenuation with a slowly varying power - law exponent .in viscoelasticity the relaxation modulus , defined by the constitutive stress - strain relation is assumed to have positive relaxation spectral density .the latter statement means that for every where is the inverse of the relaxation time and .. represents a superposition of a continuum of debye elements . for mathematical convenience eq .will be replaced by a more general equation where is a positive measure : ) \geq 0 ] of the positive real axis . as indicated in the subscript of the integral signthe range of integration is the set of reals satisfying the inequality . in generalthe measure of the one - point set is finite and equal to the equilibrium modulus .an additional assumption ensures that is integrable over ] : the function is non - decreasing and right - continuous : ) = \lim_{\varepsilon \rightarrow 0 + } \mu([0,r+\varepsilon]) ] is a bernstein function if and its derivative is completely monotone .a bernstein function is non - negative , continuous on ,\infty[ ] is a complete bernstein function if ( i ) has an analytic continuation to the complex plane cut along the negative real axis ; ( ii ) , ( iii ) , ( iv ) in the upper half plane .theorem [ thm : j ] has the following corollaries : ( 1 ) if is a cbf then is a cbf if .+ ( 2 ) if then the function is a cbf . 
every complete bernstein function has the integral representation : ,\infty[\ ; } \frac{\nu(\dd r)}{x + r}\ ] ] where and is a positive measure satisfying the inequality . if is a completely monotone function integrable over ] , the inequality and eq .imply that and therefore .the limit implies that , where denotes the wave front speed .the last conclusion follows from the fact that for and the right - hand side is integrable with respect to the measure in view of eq . . for the integrand of ,\infty[\; } \frac{\nu(\dd r)}{p + r}\ ] ] tends to zero , hence , by the lebesgue dominated convergence theorem , the integral tends to zero as well .it follows additionally that ] for .the attenuation function and the dispersion function satisfy linear dispersion equations in parametric form : ,\infty[\ ; } \frac{\vert p \vert^2 + r \ , { \mathop{\mathrm{re}}}p } { \vert p + r \vert^2 } \nu(\dd r)\\ \mathcal{d}(p ) = -{\mathop{\mathrm{im}}}p \int_{]0,\infty[\ ; } \frac{r}{\vert p + r\vert^2 } \nu(\dd r)\end{gathered}\ ] ] the measure represents the dispersion - attenuation spectrum .an elementary dispersion - attenuation is represented by the function for a fixed value of .since , where is the creep compliance , we can express the green s function in the form of a convolution where is the inverse laplace transform of the function . in terms of the inverse fourier transformation note that .the integrand is square integrable if and if eq. holds then the paley - wiener theorem ( theorem xii in ) can be applied . by this theorem for if and only if is a causal function .hence , if for then vanishes for .( solid line ) and ( dashed line).,scaledwidth=75.0% ] in particular , if , , , eqs and are ensured by the inequality .a precursor appears for , as can be seen from fig .[ fig:1 ] .for the -stable probability can be expressed in terms of airy functions , see . ]there is no wavefront and the peak is preceded by a precursor extending to infinity .the limit case , as we already know , is the asymptotic behavior and it entails unbounded propagation speed . for a general viscoelastic medium with a positive relaxation spectrumwe note that ,\infty[\ ; } \frac{\nu(\dd \xi)}{\xi - \ii \omega } = \omega^2 \int_{]0,\infty[\ ; } \frac{\nu(\dd \xi)}{\xi^2 + \omega^2}\ ] ] if the total mass of is finite then by the lebesgue dominated convergence theorem . in the general case the inequality valid for and the lebesgue dominated convergence theoremimply that ] , the residuum at contributes if .hence and the solution of in a three - dimensional space is given by the formula hence where the function is a totally skewed lvy stable probability density .in the case of power - law attenuation and for hence ( , sec . 4.44 )the integrals are uniformly convergent for and the derivatives exist for all positive integers and .all the properties of the dispersion - attenuation function were derived from the fact that is a complete bernstein function .it follows from the above property of that is a complete bernstein function .it does not follow from here that is a also complete bernstein function .therefore need not be a completely monotone function .a counterexample is provided by the power law attenuation model . if then the relaxation modulus is not completely monotone but the attenuation function satisfies eq . .the latter condition is satisfied for the power law attenuation if and only if ( fig . [ fig:0 ] ) . for the power - law attenuation .the exponent values are , from bottom to top . 
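The parametric dispersion relations quoted above can be checked numerically. The sketch below evaluates them at p = -i*omega for a power-law spectral density proportional to r^(alpha-1) dr with 0 < alpha < 1, the density suggested by the Laplace-transform identities discussed in the following section. Under these assumptions the fitted attenuation exponent should come out close to alpha, i.e. in the sublinear regime; scipy is assumed to be available.

```python
import numpy as np
from scipy.integrate import quad

alpha = 0.6                      # spectral exponent, 0 < alpha < 1 (sublinear regime)

def attenuation(omega):
    """A(omega) = integral of omega^2/(r^2 + omega^2) * nu(dr), with density nu(dr) = r^(alpha-1) dr."""
    integrand = lambda r: omega**2 / (r**2 + omega**2) * r**(alpha - 1.0)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

def dispersion(omega):
    """D(omega) = omega * integral of r/(r^2 + omega^2) * nu(dr), for the same density."""
    integrand = lambda r: r / (r**2 + omega**2) * r**(alpha - 1.0)
    val, _ = quad(integrand, 0.0, np.inf)
    return omega * val

omegas = np.logspace(0, 3, 7)
A = np.array([attenuation(w) for w in omegas])
Dv = np.array([dispersion(w) for w in omegas])

# Log-log slope of the attenuation versus frequency: should reproduce alpha (< 1, sublinear growth).
slope = np.polyfit(np.log(omegas), np.log(A), 1)[0]
print("fitted attenuation exponent:", slope)
# For a pure power law the ratio of attenuation to dispersion is independent of frequency.
print("A/D ratio:", A / Dv)
```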
]the spectral measure of the dispersion - attenuation function , , is the laplace transforms \ , \dd y\\ y^{-\alpha } = \int_0^\infty \e^{-y z } \ ,\left[z^{\alpha-1}/\gamma(\alpha)\right]\ , \dd z\end{gathered}\ ] ] hence which implies eq . .a large number of papers have been devoted to the implications of the kramers - kronig dispersion relations for the wave number .the kramers - kronig dispersion relations would follow from the assumption that the function is the fourier transform of a causal function or causal tempered distribution .a priori the function has no physical meaning and the assumption of causality of is unwarranted .causality of would however be justified by the assumption that and the equation of motion has the following form in this case .is however incompatible with the viscoelastic constitutive equation . in a viscoelastic equation of motionintegral operators should act on the laplacian of .in the authors try to guess the viscoelastic constitutive equation compatible with .their approach involves an approximation of a spatial derivative by a temporal derivative .it is however possible to avoid an approximation by shifting the dispersive terms on the left - hand side of to the right - hand side .the laplace transform of the left - hand side of eq .is ^ 2/c_0^{\;2}\ ] ] assuming that .hence has the form {,x}\ ] ] where is the inverse laplace transform of ^{-2}\ ] ] expression is the laplace transform of a completely monotone function if and only if ^ 2/p ] increases to its maximum value if ( `` abnormal dispersion '' ) .for it increases from at zero frequency to infinity at a finite frequency ^{1/(\alpha-1)}$ ] and changes sign .this behavior is clearly unphysical .for phase speed decreases from at to 0 at infinite frequency ( `` normal dispersion '' ) .the limits on the dissipation - attenuation exponent actually apply to the asymptotic value of at infinity .is it possible that the experimentally measured power law behavior applies to the middle frequency range ? define the variable dissipation - attenuation exponent as the function so that .this definition has a major flaw : a singularity at .the exponent decreases to at and restarts from to decrease towards its asymptotic values .the simplest examples of dispersion - attenuation functions with variable exponent are and is a complete bernstein function because is obviously a complete bernstein function .moreover and . hence is an admissible dispersion - attenuation function . in order to prove that is a complete bernstein function we need theorem [ thm : j ] .we now note that . for in the upper half - plane ,hence and is a complete bernstein function .since and , is admissible as a dispersion - relaxation function . in the first case , , the attenuation exponent and from below as . 
in the second case the exponent increases with frequency. it is thus likely that the exponent assumes a larger value in the high-frequency range, and a value of the exponent above 1 in the middle frequency range is therefore unlikely. the class of admissible dispersion and attenuation functions can be characterized by a class of radon measures. the theory applies only to sublinear attenuation growth in the high-frequency range. superlinear growth of the attenuation function in the high-frequency range is incompatible with the assumption of a positive relaxation spectrum underlying theoretical and experimental viscoelasticity. superlinear growth of attenuation also implies that the phase speed is unbounded at high frequencies and that the main signal is preceded by a precursor of infinite extent.
it is shown that the dispersion and attenuation functions in a linear viscoelastic medium with a positive relaxation spectrum have a sublinear growth rate at very high frequencies. a local dispersion relation in parametric form is derived. the exact limit between attenuation growth rates compatible and incompatible with a finite propagation speed is determined. incompatibility of a superlinear frequency dependence of attenuation with a finite speed of propagation and with the assumption of a positive relaxation spectrum is demonstrated.

* keywords: * viscoelasticity, wave propagation, dispersion, attenuation, bio-tissues, polymers
brownian motion is the random movement for some nanoscopic particles in a fluid . it is named in honor of robert brown who described it in 1827 .the random movement of these particles is the result of constant bombardment of his surface by fluid molecules under a thermal agitation .atomic bombardment at this scale is not always completely uniform and have large statistical variations .the pressure on one side can vary , causing the movement observed [ 4 ] .the mathematical description of the phenomenon was developed by albert einstein [ 1 ] .einstein found a way to confirm the atomic nature of matter observing the relationships between the macroscopic diffusion coefficients d and the atomic properties of matter .this relationship is : since r is the gas constant , na avogadro number , t is temperature in kelvin , the viscosity , `` a '' the radius of the brownian particles and d is the diffusion coefficient of the material suspended in the liquid [ 1,2].the theory of brownian motion was developed in order to describe the dynamic behavior of particles whose mass and size are much larger than the rest in the medium in which they are .einstein was succeeded in proving that the average movement of the brownians particles in one direction is an expression such as : in this work we present a new simple method to simulate the diffusive behavior of particles that interact in a fluid , we control the statistical characteristics of the particles movement by simples integers sums .we observed only the brownians particles in the simulation .each particle is localized in a two - dimensional network and they have for each time - step one movement direction .this direction will be chosen randomly by our algorithm .the simulation represents a percentage of total particles immersed in fluid , showing the phenomenon of self - diffusion .we perform calculations for the ( srampd ) in order to compare with the experimental values obtained for the diffusion coefficient in [ 1 ] .we calculate the standard deviation ( sd ) of the values obtained for the ( srampd ) in 50 simulations as a measure of the dispersion of the simulated data .also we make simulations changing the concentration of particles fixing the network size , and changing the size of the network for a fixed concentration . in both caseswe calculate the behavior of the ( srampd ) and the ( sd ) .our model use two networks represented as two matrix ( mi and me ) .each number inside on the matrix cell s in `` mi '' represent the position in `` me '' matrix .this number may change for each time step and determine the particle position in the spatial matrix me .the supra - index of each cell in mi represents the name of each particle in me .( see figure 1 ) .this method will be called `` reticular matrix mapping '' or mmr for his spanish acronym . in each timestep each particle will move randomly in one of the possible directions .each particle will have 4 possible movement directions for the two - dimensional case and 5 if we take the possibility of non - movement . to perform this movement , foreach particle in mi one may sum a integer , for each time - step .this sum may possible the particles movement to other position . to move any particle to the rigth side we need to add ( + 1 ) at the corresponding mi cell, this will move the particle to the right side in me matrix .if we sum ( -1 ) we will move the particles to the left side , ( -n3 ) for up , ( + n4 ) for down and ( + 0 ) for non - movement . 
n1 and n2 are the matrix components for mi and n3 , n4 the matrix components in me . as an example : we take the particle 6 in position 13 , if we add ( + 5 ) to the particle 6 we will move the particle from position 13 to 18 in `` me '' ( see figure 2 ) .our particle systems have restrictions to moving up , down , right , left and no movement .we can assign a percentage of movement for each direction .every time - step each particle will have the chance to move in a direction that will be chosen randomly .we assign a weight of probability for each of his possible directions . in this case for our propose , all direction must have the same probability weight to correspond a random movement in all directions like in a brownian movement or a 2 dimensional random walk .( 20% up , down , right , left and non moving ) .this method allows us to stadistastically control the particles movement . in the figure 4we show snap - shots for particles which moves randomly with different weight porcentage . in this case, the particles will move preferentially to the bottom part , like high density particles in a classic fluid , subject only to gravity forces , i.e. a more higher probability to move in a `` down '' direction .the simulation showed in the figure 4 correspond with weight porcentage probabilities : down 30% , up 20% , left 20% , right 20% and non - movement of 10% in 1000 steps simulation .we can appreciate a clear trend for all particles to stay in the bottom part of the picture .a mechanism in the code prevents particles with the same number in the mi matrix , avoiding particles in the same position . for the boundary conditionsare established rules on the border cells to determine the type of boundaries that we want . in our case , we choice as boundary conditions rigid - walls enclosing all particles .in order to compute ( srampd ) we take the initial and final position for all particles in the network .the number of particles is given by `` n '' sub - index .the initial position are represented by `` i '' sub - index and the final position by the `` f '' sub - index . where n = n1xn2 is the number of particles in mi . 
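a minimal sketch (python/numpy; grid size, weights and seed are illustrative assumptions) of the stepping rule described above: every particle draws one of the five moves (up, down, left, right, stay) according to its probability weights, and a move is rejected if the target cell is already occupied or lies outside the rigid walls. the srampd defined next is computed afterwards from the stored initial and final positions.

```python
import numpy as np

rng = np.random.default_rng(0)

N3, N4 = 100, 100        # me is an N3 x N4 grid of cells with rigid walls at the border
n_particles = 900        # e.g. a 30 x 30 mi matrix
moves = {"right": (0, 1), "left": (0, -1), "up": (-1, 0), "down": (1, 0), "stay": (0, 0)}
weights = {"right": 0.2, "left": 0.2, "up": 0.2, "down": 0.2, "stay": 0.2}  # equal weights: brownian case

# random, non-overlapping initial positions (row, col) in me
flat = rng.choice(N3 * N4, size=n_particles, replace=False)
pos = np.column_stack(np.unravel_index(flat, (N3, N4)))
occupied = set(map(tuple, pos))

def step(pos, occupied):
    """advance every particle by one randomly chosen move, rejecting blocked moves."""
    names = list(moves)
    p = np.array([weights[m] for m in names])
    for i in range(len(pos)):
        dr, dc = moves[names[rng.choice(len(names), p=p)]]
        r, c = pos[i]
        nr, nc = r + dr, c + dc
        if 0 <= nr < N3 and 0 <= nc < N4 and (nr, nc) not in occupied:
            occupied.discard((r, c))
            occupied.add((nr, nc))
            pos[i] = (nr, nc)
    return pos, occupied

initial = pos.copy()
for _ in range(200):     # 200 time-steps, as in the simulations described in the text
    pos, occupied = step(pos, occupied)
```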
using the familiar expression for the two - dimensional distance between two points ; summing and dividing over all particles we obtain the ( srampd ) in the simulation : to comparate ( srampd ) in the simulation , whit the experimental ( srampd ) we need to establish a scale parameter associating in this , the simulation scale with the real scale .this expression will be given by the product between the ( srampd ) in the simulation and the scale factor : if n is equal to in accordance with the kinetic theory of gases , water at , , the particles diameter 0,001 mm we have that in the x direction is 0.8= and the displacement of a half minutes will be 0.6 [ 1 ] .this value is independent of the number of experiments made it and its a invariable value .if we take the ( srampd ) simulation value : =10.3159 and divide between the experimental value of in [ 1 ] .we can calculate the value of using ( 4 ) .obtained in 50 simulations with the same parameters mi 30x30 = 900 particles , me 100x100=10.000 cells in 200 time - steps .the probability for each movement direction was 20% for each one : up , down , right , left and not moving .the mean of in the computational experiments carried was =10.3159 cell units and the standard deviation was 0.0152 in the same units.,scaledwidth=80.0% ] values for different network sizes with the same particles concentration .the particles concentration was fixed at 20.25% .if me = 6400 = 80x80particles , and mi=36x36=1296 particles the concentration will be 20.25% .each data obtained for was the average of 50 simulations .each simulation was performed in 200 time - steps with the same characteristics .the probability for each motion direction was 20% up , down , left and right and non movement.,scaledwidth=80.0% ] .these values represent a measure of data dispersion in the simulation.,scaledwidth=80.0% ] figures 6 and 7 show the behavior of when we vary the network size maintaining the same particles concentration 20.25% for each simulation .this concentration is a measure between the total number of cells in me and the number of particles in mi .each data obtained come from a average of 50 simulations performed with the same characteristics for ; 200 time - steps and the movement probability for each direction was 20% : up , down left and right and non movement .the results in figure 6 shows a similar behavior to hardy and pomeau [ 7 ] they measured the average time of free flight in a network , fixing the concentration and increasing size of the network .his results are comparable with our result however our method propose a direct way to change the movement statistical characteristic in the simulation and a new simulation technique .figure 7 shows how the scattering of data decreased as the network gets bigger , larger networks produce less dispersion in data .the dispersion value in the data are small if we compare whit the size of the network and the values of ( srampd ) .the minimum was approximately 0.02 cell units and the maximum 0.14 cells units .values varying the concentration of particles and fixing the size of the network me=100x100=10.000cells .each measure is the average obtained from 50 simulations for , each simulation was performed under the same conditions 20% probability for each movement direction : up , down , left and right does not move in 200 time - steps.,scaledwidth=80.0% ] .these values represent a measure of the dispersion data in the experiments.,scaledwidth=80.0% ] graphs 8 show the behavior of the ( srampd ) when we varying the 
concentration of particles in the simulations and fix the network size .we obtained data for 18 different concentrations ranged from 1% to 90% of occupancy .each point corresponded to the average value of in 50 simulations ; each simulation was performed in 200 time - steps with the same movement probability : up , down , left right and non movement 20% .the network size for all cases was 100x100 cells or 10.000 possible positions .we see as the value of ( srampd ) decreases when we increasing the particle concentration .that is attributed to the fact that the movement of particles depends of the numbers of available possibilities in his neighbors , if we have a more higher numbers of particles , the possibilities will be decrease changing the particles mobility .the values of ( sd ) remained in a relatively small variations if we compared with the size of the network and the simulation magnitud of ( srampd ) .the properties on invariability obtained by einstein in [ 1 ] for the compute of the ( srampd ) and the respective comparison with the diffusion coefficient in a fluid , give us a prove of the randomness characteristic of the brownian movement phenomenon .this is the same principle used in random walks in 2d .no matter how many times does the experiment we always get the same results .we represent this situation with our new method mmr and we obtained the same results for many experiment .the graphics 6 show us this result with a little dispersion proved by the smalls values of ( sd ) . for 50 experimentswe obtain a dispersion of 0.0152 cells units and the mean in 50 simulations was =10.3159cells units . in order to compare with real experiments showed in [ 1 ]we propose a scale parameter .we perform simulations changing the size of the network and fixing the concentration , and fixing the network size and changing the concentration . in the first case we observe an increase of and a soft decline in the dispersion . in the second casewe observe a decline of caused by the increasing number of particles . for too many particlesthey lose possibilities to occupy positions and mobility in general decrease .the method allows us to implement a so higher complex situation , and this work represents only ones of the more simples experiment .we will go to performe simulations in : the particles movement having preferences to move in one special direction , simulations taking in a count the collisions between particles , 3d networks , fluid particles moving in a porous media , and so other situations .
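for completeness, the srampd and the scale parameter described above can be computed as follows (python/numpy; a continuation of the stepping sketch given earlier). the experimental displacement is estimated from the einstein-stokes relation d = rt/(6 pi eta a n_a) together with the standard one-dimensional result lambda_x = sqrt(2 d t); the temperature, viscosity, particle radius and observation time below are illustrative assumptions, not the exact values of [1].

```python
import numpy as np

def srampd(initial, final):
    """square root of the arithmetic mean of the squared particle displacements (cell units)."""
    d2 = np.sum((np.asarray(final, float) - np.asarray(initial, float)) ** 2, axis=1)
    return np.sqrt(d2.mean())

# einstein-stokes diffusion coefficient and mean one-dimensional displacement (assumed values)
R, N_A = 8.314, 6.022e23                       # gas constant [J/(mol K)], avogadro number [1/mol]
T, eta, a, t = 290.0, 1.0e-3, 0.5e-6, 60.0     # temperature [K], viscosity [Pa s], radius [m], time [s]
D = R * T / (N_A * 6.0 * np.pi * eta * a)
lam_x = np.sqrt(2.0 * D * t)

# 'initial' and 'pos' are the arrays produced by the stepping sketch above
sim = srampd(initial, pos)                     # srampd in cell units
scale = lam_x / sim                            # metres per cell unit (one possible convention)
print(f"D = {D:.3e} m^2/s  lambda_x = {lam_x:.3e} m  srampd = {sim:.3f} cells  scale = {scale:.3e} m/cell")
```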
we propose a new model based on a cellular automata technique to simulate the behavior of brownian particles with restrictions. in our model each particle moves randomly at each time-step in one of its possible directions on a two-dimensional network. the movement of each particle is the result of collisions with other particles, and this interaction determines the movement direction; particles can only occupy one cell at a time. we take only a representative percentage of the fluid particles. we calculate the square root of the arithmetic mean of the particle displacements (srampd) in order to make a comparison with the diffusion coefficient in an adiabatic fluid. we also observe the behavior of the (srampd) when we change the network size of the simulation for a fixed concentration, and when we change the particle concentration for a fixed network size.
gravitational n - body simulations deal with the motions of the many bodies ( particles ) interacting with other particles by gravitational force and are used for solving astronomical problems : formation of stars and galaxies .there are basically two types of n - body systems : collisional and collision - less systems . in a collisional system ,a number of particles is relatively small and the orbit of the particles is significantly deformed by force from the nearby particles . in a collision - less system ,a number of particles is large and the effect from near particles is relatively small .also it does not necessary require highly accurate force calculation .a most simple algorithm for calculating forces ( or acceleration ) between these bodies is a direct algorithm that calculates interactions of all pair of particles .however , we can reduce the calculation complexity by barnes - hut tree algorithm that approximates forces from many source particles as force from one source particle by tree structure for particles .the calculation complexity of the tree algorithm is , but accuracy of force is worse at the expense of the approximation .as the tree algorithm , many techniques for reducing calculation time of n - body simulation have been developed so far .however , it is necessary to further speed - up the calculation for large - scale simulations .we usually speed - up n - body simulations by parallel computing using message passing interface ( mpi ) along with acceleration techniques such as graphical processing units ( gpu ) . nowadays, gpu is used for not only graphic processing but also general purpose processing .gpu enables us to accelerate n - body simulations by running the tree algorithm on it . as an approach for further reducing calculation cost of the tree algorithm, we can extend particle - particle particle - tree(pppt ) algorithm .pppt algorithm is a hybrid of direct and tree algorithms for collisional simulation . in the method, we split gravitational force into short - rage and long - range force .the accurate direct algorithm is used for calculating short - range force while we use the tree algorithm for calculating long - range force .we apply different time integration methods for the two parts of the force .accordingly , we only adopt high accuracy methods for short - range force and can reduce the cost of unimportant ( distant and weak ) force calculation . in this paper , we show a new algorithm based on pppt scheme for reducing calculation and communication cost of parallel n - body simulations .we evaluated the performance of our algorithm on gpu clusters where each node of the cluster is equipped with gpus .in this section , we describe basic concepts for our hybrid tree algorithm. motions of gravitational bodies follow the following equation of motion , here , is softened gravity force expressed as where is a position of a `` sink '' particle that is forced from other particles , and is a position of a `` source '' particle that exerts the force to other particles , is a mass of the sink particle , is a gravitational constant , and is the softening length to reduce non - realistic acceleration when .simply , we calculate all pair interactions of particles for calculating right hand side of equation ( [ nbforce ] ) .we call the simple algorithm for calculating force to a particle brute - force algorithm or direct algorithm. 
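a direct (brute-force) evaluation of the softened accelerations just described takes only a few lines; the following is a minimal numpy sketch, not the code used in the paper, with units chosen so that g = 1.

```python
import numpy as np

def direct_accelerations(x, m, eps, G=1.0):
    """O(N^2) softened gravitational accelerations.

    x   : (N, 3) positions
    m   : (N,)   masses
    eps : softening length
    """
    dx = x[None, :, :] - x[:, None, :]           # dx[i, j] = x_j - x_i (source minus sink)
    r2 = np.sum(dx * dx, axis=-1) + eps * eps    # softened squared distances
    inv_r3 = r2 ** (-1.5)
    np.fill_diagonal(inv_r3, 0.0)                # no self-interaction
    return G * np.einsum('ij,j,ijk->ik', inv_r3, m, dx)

# example: 1000 equal-mass particles at random positions
rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 3))
m = np.full(1000, 1.0 / 1000)
acc = direct_accelerations(x, m, eps=0.01)
```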
we solve the motion of particles by numerical integration of a position of each particle with the calculated force .actually , the integration is performed by updating velocities followed by updating positions of particles .this integration scheme is called the leap - frog method .the leap - frog scheme is a second - order symplectic integrator .a velocity and a position of a particle are updated as follow , where is velocity of a particles at time , is position of a particle at time , is a time - step for integration . in the following ,we call the velocity update as `` kick '' and the position update as `` drift '' .the tree algorithm is a technique for reducing the cost of force calculation for large - scale simulations .the concept of tree algorithm is that we approximate force from many distant source particles into force from one source particle as the center of mass of the particles .force calculation by tree algorithm is performed as follow : constructing tree ; calculating center of mass of tree nodes and criterion for depth of tree traversal ; and traversing tree and calculating force . for constructing tree structure ,we divide three dimensional space into eight equal size cells recursively from root cell that contains all particle in the system .the division is recursively continued while the cell has many particles than a critical number of particles . as the result ,the particles are placed on leaves of the tree .next , to approximate distant particles , we calculate center of mass of cell for each cell . then , we calculate multi - pole acceptance criterion ( mac ) of each tree node as the criterion for tree traversal .mac determines whether we further traverse leaf cells of the cell or calculate force from the cell .we use absolute mac , where is the maximum distance between the center of mass and particles in the cell , is a position of a center of mass of source cell , and is the number of particles in the cell . is a numerical parameter specified by user to control the accuracy of force calculation .finally , we traverse the tree for calculating force .we start from root cell .if , where is a distance between sink particle and center of mass of source cell , we further visit to leaf cells to traverse in more detail , else we add the force from the cell and go to next node .after tree traversal , we get the force of a sink particle by summing forces from source cells and particles . here, we explain an algorithm we proposed based on pppt algorithm for collision - less systems .our algorithm is the similar to pppt algorithm : splitting force into hard - force from near particles and soft - force from distant particles but we adopt a different numerical method for the hard - force part . in the original pppt algorithm ,the direct algorithm is used for high accuracy calculation of hard - force .the high accuracy is not necessary for collision - less simulation , thus we can design our algorithm for hard - force having lower accuracy than original pppt algorithm .another difference of our algorithm is that we try to speed - up the calculation by reducing communication cost in parallel computing .the force is divided by using a kernel function as follow . where is the hard - force , and is the soft - force . 
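before turning to the kernel function, here is a minimal sketch of the leap-frog update described above, written in the synchronized kick-drift-kick form (equivalent to the staggered velocity/position update); the acceleration routine is the direct one from the previous sketch and is used only for illustration.

```python
import numpy as np

def leapfrog_step(x, v, m, dt, eps, accel=direct_accelerations):
    """one second-order kick-drift-kick step: returns positions and velocities at t + dt."""
    v = v + 0.5 * dt * accel(x, m, eps)   # kick  (half step)
    x = x + dt * v                        # drift (full step)
    v = v + 0.5 * dt * accel(x, m, eps)   # kick  (half step)
    return x, v

# example: integrate the random particle set from the previous sketch for a few steps
v = np.zeros_like(x)
for _ in range(10):
    x, v = leapfrog_step(x, v, m, dt=0.01, eps=0.01)
```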
for a kernel function ,we use the dll function ( adopted in ) written as follows , where , and are constants specified by user determining the size of transition zone between hard and soft forces .we use the tree algorithm and the leap - frog for both of soft and hard forces but the time - step for soft ( ) and hard ( ) are different .we make the relation between and as , where .illustration of our integration is shown in figure [ fig : ht ] .we call the step calculating both of soft and hard force `` soft - step '' and the step of calculating only hard - force `` hard - step '' ; we need one soft - step and hard - steps to calculate time evolution for .since the soft - step theoretically require the position of all particles but the hard - step only relies on the position of near particles , we expect that calculation and communication cost of the proposed hybrid tree algorithm is lower than the normal tree algorithm .the time integration error of our algorithm expected to be slightly larger than the normal tree algorithm with time - step due to reduction of long - range force calculation .however , we can control the error by choosing appropriate parameters for , , , , and . .in this section , we present how we use make our parallel using openmp and mpi along with explanation for gpu computing . in this section ,we show the procedure and data structure for our tree algorithm .our method is based on as constructing tree by cpu and traversing tree by gpus . first , we construct tree structure of particles .we make cell - nodes above the particle - nodes and connect the nodes with pointers . in the method , each node has `` more '' pointer to the first leaf cell and `` next '' pointer to the next cell / particle to traverse skipping over leafs . for tree construction ,we first calculate the region and size of the root cell that include all particles .then , we calculate keys of the particles .we use the morton key as the key that is following order of morton curve ( or z - curve ) , a space filling curve .the advantage of moron key is that it encodes hierarchical information of position of particles .the key calculation is able to be executed individually for particles , thus we use openmp for parallelizing the calculation .second , we sort the keys .we use the extension of c++ standard library std::sort for sorting in parallel with openmp .we also sort the data of the positions of the particles to preserve locality of the particles .third , we divide the array into eight sub - arrays by the three most significant bits of the keys , then we set a cell node with `` more '' pointer and `` next '' pointer and make child cells with next pointers .then we recursively repeat the procedure for every three bits of key while the array has or more particles .if the array has particles fewer than , we treat each particle as leaf node . 
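the morton (z-order) key used in the tree construction just described interleaves the bits of the integer cell coordinates, so that sorting by key preserves spatial locality. a minimal sketch (python/numpy) assuming 21 bits per coordinate, i.e. a 63-bit key; this is an illustration, not the implementation used in the paper.

```python
import numpy as np

def spread_bits(v):
    """spread the lower 21 bits of v so that two zero bits separate consecutive bits."""
    v = v & 0x1FFFFF
    v = (v | (v << 32)) & 0x1F00000000FFFF
    v = (v | (v << 16)) & 0x1F0000FF0000FF
    v = (v | (v << 8))  & 0x100F00F00F00F00F
    v = (v | (v << 4))  & 0x10C30C30C30C30C3
    v = (v | (v << 2))  & 0x1249249249249249
    return v

def morton_keys(pos, box_min, box_size, bits=21):
    """morton keys of 3-d positions inside the root cell [box_min, box_min + box_size)^3."""
    cells = ((pos - box_min) / box_size * (1 << bits)).astype(np.uint64)
    cells = np.clip(cells, 0, (1 << bits) - 1)
    return (spread_bits(cells[:, 0]) << 2) | (spread_bits(cells[:, 1]) << 1) | spread_bits(cells[:, 2])

# sort particles along the z-curve so nearby particles stay close in memory during traversal
rng = np.random.default_rng(2)
pos = rng.random((1000, 3))
order = np.argsort(morton_keys(pos, box_min=0.0, box_size=1.0))
pos = pos[order]
```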
finally , we need to calculate center of mass and mac of each cell .both of them are calculated from position and mass of particles contained in the cell .thus , we calculate center of mass and mac by traversal of the part of tree , and the calculation is individual for each cell , and we parallelize the calculations by openmp .we use gpus for tree traversal and force calculation .this part is implemented in opencl , the framework for parallel computing .first , cpu sends the data of tree to gpu .then , we run the kernel code for traversing tree .the kernel code traverse the tree by indexing `` more '' and `` next '' pointers .the tree traversal is individually executed for each sink particle .thus , all threads are run by gpus in parallel .in addition , for reducing a number of tree traversal , multiple sink particles traverse in same thread .the number of particles traversing in same thread is .we typically set as efficient number of particles for gpu .as a distance for determining whether traverse the children or not , we use minimum distance between a source cell and sink particles ; we traverse the leaf if for .if particles are unsorted , and particles are distant each other , we need to traverse unnecessary nodes because may be small for distant cell .thus , for reducing unnecessary traversal , we should sort the particle data so that we retain data locality of positions of particles . our method of parallelization is that we assign each mpi process own region that contains subset of particles , and each mpi process calculates force by own particles and particles received from other processes .we have already presented the parallelization on each process attached gpu . here , we show the implementation of parallel computation and communication of our algorithm on gpu clusters .our procedure for parallel n - body simulations is as follow , 1 .domain decomposition 2 .constructing local tree 3 . calculating force from local tree 4 .communicating tree from remote processes 5 . calculating force from remote tree 6 .updating positions and velocities of local particles first , we need to distribute particle data to each process . to simplify communication for hard - force calculation , a shape of a region of a process should be a cuboid . as the method of cuboid domain decomposition , we use the method introduced in . with the method , we decompose whole region into regions , where and are the number of division in and direction .the decomposition is implemented as exchanging of particles between neighbor processes given pre - determined boundaries between regions . to determine the boundaries, we use the sampling domain decomposition method used in . in the method , we gather sample particles to a main process , then the main process tries to balance the boundaries such that each process has approximately same number of sample particles . . ] illustration of sampling domain decomposition is shown in figure [ fig : dd ] . here , the number of sample particles is defined for balancing the sum of the number of local particles that are assigned to each process and the number of particles received from other processes for hard - force calculation . for process is determined as follow ; where is the total number of particles , is sampling rate constant , and is a correction factor for balancing .we typically set in the present work . 
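the sampling idea just described can be illustrated in a single split direction (python/numpy): each process contributes a random sample of its particle coordinates, and the main process places the boundaries at the quantiles of the pooled sample so that every slab receives roughly the same number of samples. the correction factor and the load-balance measure defined next are omitted from this sketch.

```python
import numpy as np

def sample_boundaries(local_x_per_proc, n_div, rate=0.01, rng=None):
    """boundaries of n_div slabs chosen so that each slab holds about the same number of samples."""
    rng = rng or np.random.default_rng()
    samples = []
    for x in local_x_per_proc:                    # each process samples its own particles
        k = max(1, int(rate * len(x)))
        samples.append(rng.choice(x, size=k, replace=False))
    pooled = np.sort(np.concatenate(samples))     # gathered on the main process
    idx = (np.arange(1, n_div) * len(pooled)) // n_div
    return pooled[idx]                            # interior boundaries between the n_div slabs

# example: 4 processes with uneven particle counts and different spatial distributions
rng = np.random.default_rng(3)
local = [rng.normal(loc=mu, size=n) for mu, n in [(0, 5000), (1, 2000), (2, 8000), (3, 1000)]]
print(sample_boundaries(local, n_div=4, rng=rng))
```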
is the measure for load balancing defined as our intention is that we make the calculation cost for hard - force equal on all processes because the calculation is the majority of running time in our case .after the main process determines the boundaries , it broadcasts the result to all other processes , and each process exchanges necessary particle data between other processes .to reduce the cost of domain decomposition , we execute it only for every soft - step ; at hard - steps , a process has the same particles as the previous soft - step .after construction of tree structure of local particles , we need to communicate the particles of other processes to calculate the force from the particles . in our method , we need different set of particles for hard and soft force , respectively . for communicating the soft particles , we need data of all particles , but distant particles are able to be approximated as the center of mass of the cells . locally essential tree (let ) is the method for communicating only necessary part of tree for the processes . for determining the cells to send, we traverses the local tree with mac , where is the distance between center of mass of cell and boundary of process , and is the mac calculated by method in section [ sec : tree ] .we only send the position and the mass of center of mass for a cell .both the calculation cost for determining cells to send and the cost for communicating cells are . as the result of communication , a process get the cells that we need to calculate the force in the process as shown in figure [ fig : comm2 ] . here , we show the cells that upper - left process needs to receive from other processes .we use mpi asynchronous send and receive functions to exchange data . after the communication, the process concatenates the arrays of own particles and the cells received by neighbor processes and constructs a let .then , we traverse the tree and calculate force from the remote particles to local particles . for hard - force, we also use let scheme , but a process only need cells around the boundary that is at the distance less than as showing in the red cells in figure [ fig : comm2 ] . to determine the cells to send , we traverse the local tree .in addition to mac in equation ( [ eqn : mac ] ) , the condition to determine whether traverse the leaf for searching the cells or not is applied as follow after the tree traversal , we obtain the cells in the process that the distance to boundary of process is smaller than .the communication cost for hard - force is smaller than for soft - force . especially , the cost is significantly reduced in large number of processes because of reduction of volume that a process needs to consider .we execute the force calculation on gpus and other calculations on host cpu .thus , we overlap both calculation with communications . while traversing the tree in gpus , cpu communicate particles and construct a let using received particles . while gpus run kernels , we need to retain a thread for management of the gpu .the thread is generated by using pthread api , an programming interface the interface for thread programming .as the result of the overlap , the total calculation time of the overlapped processes is constrained by the maximum calculation time of the cpu threads and the gpu .however , we need a cpu thread for organizing queuing jobs to opencl device such as gpu . 
thus , performance of calculations that use openmp may be decreased .in this section , we present the performance evaluation of n - body simulations with our hybrid tree method . for the test of our algorithm , we use plummer model .the plummer model is a typical spherical model of n - body simulations .we set , and , where is mean velocity of the system in the case of our simulation , as being in range of optimal parameters for the model shown in .we have the following numerical parameters that control the balance between the execution time and accuracy of the simulations : , , , and . in the present work, we typically set . , , and should be adjusted for maintaining sufficient accuracy of error in total energy of the system that is the sum of kinetic and potential energy after simulation . by the result of test simulations for our model , we choose an optimal parameters as , , and we sent in the present work .development and computations for the present work have been carried out under the `` interdisciplinary computational science program '' in center for computational sciences , university of tsukuba .a node of ha - pacs has two intel e5 - 2670 ( 8 cores ) cpus with four nvidia tesla m2090 gpus .actually , we assign four mpi processes per node of ha - pacs such that one mpi process is exclusively assigned one gpu board . here, we compare the calculation cost of kernel for calculating hard - force by our algorithm and communication cost with the normal tree algorithm with let . in hard - force calculation , we can cut - off the tree traversal for distant cells .thus , the cost for hard calculation is reduced if we set small and . as the result of our test simulations , for between 256k and 4096k ,the time for calculating hard force at optimal is about 40% of the time for calculating force with normal tree algorithm .next , we analyze the cost for communicating of our algorithm with the normal tree algorithm . for the test , we set ( ) . in figure [ fig : nphs ] , the solid red line shows the ratio between the average number of hard particles and soft particles as a function of , the number of processes .this ratio is an indicator of the reduction of cost for gpu computing and is roughly constant at 40% .the dotted green line shows the ratio between the average number of local plus hard particles and local and soft particles .since the communication cost for our algorithm and the normal tree are proportional to and , respectively , we see that our algorithm works better in large due to the reduction of communication . for larger , we have smaller the ratio as 70 % at = 128 . and as a function of . ]we evaluate the scalability of our simulation with gpu clusters .it is not easy to reduce the execution time by number of processes linearly , e.g. good strong scaling , even if our hybrid tree algorithm can reduce the communication cost by reducing the volume of interest for communication . for the test, we run a series of simulations with 1 m ( m = ) to 64 m on up to using 32 nodes of ha - pacs .figure [ fig : scaltime ] shows the strong scaling result of our simulation ; capability of the speed - up with many processes for fixed total number of particles . here, we plot the average execution time for simulating time evolution as a function of .the time evolution of is completed with one soft - step and three hard - steps in the present work ( we set ) .we omit some cases of the simulations that were not able to run due to the limitation of gpu memory in the figure . 
the execution timeis reduced in for small , but the time hardly reduces at large and small . for ,the reduction of the execution time stops at . for , the execution timeis reduced approximately linearly , and the time at is 59% of the time at . as the result ,the execution time is sufficiently reduced when ; the calculation time with processes is typically less than 60% of that with processes . in figure[ fig : dettimet ] , we present the detailed breakdown of the execution time for hard - step , soft - step , and domain decomposition for the simulation with . here , , , , and are the execution times of total for simulating , one hard - step , one soft - step , and domain decomposition , respectively .the relation between those timing is expressed as . is reduced to about 60% of for any . and are reduced as increasing of . in for , time for force calculation , tree construction , and communicating cells are 26% , 37% , and 54% , respectively . the sum of percentages of time is larger than 100% because kernel execution on gpu and other processes on cpu are overlapped . is not reduced and be around 0.1 seconds for this case .the reason is that communication and calculation cost for domain decomposition depends on not but as shown in section [ sec : dd ] .since the core of our tree code is written in opencl api , we can use not only gpus but cpu threads to compute the tree travarsal kernels for hard and soft force . for m runs , the total execution time with is 113 , 30.6 , and 5.51 seconds , respectively while the runs with gpus took 13.6 , 3.55 , and 0.688 seconds .the speed - up factor due using gpus with is 8.3 , 8.6 , and 8.0 , respectively where we compare the time for all computation and comunication . to be more specific only on computation, we found the speed - up factor of the execution of opencl kernels is 11 - 16 times faster than the runs with cpu threads .our hybrid tree algorithm can take huge advantage of the acclearation with the gpu technology . here, we compare the result of the execution time of our algorithm to the normal tree algorithm with let that does not split force into two parts .to achieve approximately same total energy error between two algorithms , we set , where is the time - step of the normal tree .in addition , domain decomposition in the normal tree is executed every four steps to fairly compare the execution time .figure [ fig : ration ] shows the reduction of the execution time of the hybrid tree algorithm versus the normal tree algorithm , where and is the execution time of the hybrid tree and the normal tree for same simulation time .we can reduce the time to about 80% - 90% of that of the normal tree . especially , for , is even smaller as is larger .this means that our hybrid tree algorithm has the advantage for large - scale simulations .the theoretical reduction of the hybrid tree is estimated as where is the execution time of the normal tree algorithm . according to the results in section [ sec : red ] and [ sec : scal ] , it is expected that the hybrid tree algorithm can reduce the cost for hard - force to about 60 % of soft - force for large .thus , assuming that , , and , then ; we can ultimately speed - up the calculation with hybrid tree to 70% of the normal tree for large except for time for the domain decomposition .ogiya et al . 
implemented a parallel tree n-body code on ha-pacs. their gpu code uses the same algorithm as the present work; however, the implementation details of their tree traversal kernels and domain decomposition are different. in , they presented a model of cdm ( cold dark matter ), and they claimed it was hard to keep the load balanced when . for and , the execution time for four time-steps is 8.2 seconds, while the execution time for three hard steps and one soft step in our work is 6.8 seconds, so that =0.83. for , the corresponding times are 3.0 seconds and 1.1 seconds, so that =0.37. although the implementation and the simulation model differ from , our algorithm can efficiently reduce the execution time for scalable computation.

in this work, we developed a new algorithm for n-body simulations named the hybrid tree algorithm, an algorithm for accelerating collision-less n-body simulations by splitting the force from other particles into short-range and long-range parts. the proposed hybrid tree algorithm is effective in reducing the calculation and communication cost of simulations. we have implemented the algorithm on gpu clusters with up to 128 processes, and we showed that the hybrid tree algorithm can reduce the execution time to about 80% of that of the normal tree algorithm. as future work, we should investigate the scalability and speed-up of our algorithm on more scalable computing systems. in addition, we will investigate whether our algorithm is efficient for other systems and other parameters, because we have tested the algorithm with only a limited combination of parameters and only on the plummer model.

shoichi oshino, yoko funato, junichiro makino, `` particle-particle particle-tree : a direct-tree hybrid scheme for collisional n-body simulations '', publications of the astronomical society of japan, vol. 63, no. 4, 2011, pp. 881-892.
go ogiya, masao mori, yohei miki, taisuke boku, naohito nakasato, `` studying the core-cusp problem in cold dark matter halos using n-body simulations on gpu clusters '', 2013 j. phys .
we propose a hybrid tree algorithm for reducing the calculation and communication cost of collision-less n-body simulations. the concept of our algorithm is that we split the interaction force into two parts, a hard force from neighbor particles and a soft force from distant particles, and apply different time integration schemes to the two forces. for the hard-force calculation we can efficiently reduce the calculation and communication cost of the parallel tree code, because we only need data of neighbor particles for this part. we implement the algorithm on gpu clusters to accelerate the force calculation for both the hard and the soft force. as a result, we were able to reduce the communication cost and the total execution time to 40% and 80% of those of a normal tree algorithm, respectively. in addition, the reduction factor relative to the normal tree algorithm is smaller for a large number of processes, and we expect that the execution time can ultimately be reduced down to about 70% of that of the normal tree algorithm.
magneto - acousto - electric tomography ( maet ) is based on the measurements of the electrical potential arising when an acoustic wave propagates through conductive medium placed in a constant magnetic field .the interaction of the mechanical motion of the free charges ( ions and/or electrons ) with the magnetic field results in the lorentz force that pushes charges of different signs in opposite directions , thus generating lorentz currents within the tissue .the goal of this technique coincides with that of the electrical impedance tomography ( eit ) : to reconstruct the conductivity of the tissue from the values of the electric potential measured on the boundary of the object .eit is a fast , inexpensive , and harmless modality , which is potentially very valuable due to the large contrast in the conductivity between healthy and cancerous tissues .unfortunately , the reconstruction problems arising in eit are known to be exponentially unstable .maet is one of the several recently introduced hybrid imaging techniques designed to stabilize the reconstruction of electrical properties of the tissues by coupling together ultrasound waves with other physical phenomena .perhaps the best known examples of hybrid methods are the thermo - acoustic tomography ( tat ) and the closely related photo - acoustic modality , pat ) . in the latter methods the amount of electromagnetic energy absorbed by the medium is reconstructed from the measurements ( on the surface of the object ) of acoustic waves caused by the thermoacoustic expansion ( see e.g. ) .another hybrid technique , designed to overcome shortcomings of eit and yield stable reconstruction of the conductivity is acousto - electric impedance tomography ( aeit ) .it couples together acoustic waves and electrical currents , through the electroacoustic effect ( see ) .although aeit has been shown , both theoretically and in numerical simulations , to be stable and capable of yielding high - resolution images , the feasibility of practical reconstructions is still in question due to the extreme weakness of the acousto - electric effect . in the present paperwe analyze maet which also aims to reconstruct the conductivity in a stable fashion . in maetthis goal is achieved by combining magnetic field , acoustic excitation and electric measurements , coupled through the lorentz force .the physical foundations of maet were established in and . in particular ,it was shown in that if the tissue with conductivity moves with velocity within the constant magnetic field , the arising lorentz force will generate lorentz currents whose intensity and direction are given ( approximately ) by the following formula originally it was proposed to utilize a focused propagating acoustic pulse to induce electric response from different parts of the object . in wavepackets of a certain frequencywere used in a physical experiment to reconstruct the current density in a thin slab of a tissue .similarly , in the use of a perfectly focused acoustic beam was assumed in a theoretical study and in numerical simulations . however , in the above - quoted works accurate mathematical model(s ) of such beams were not presented . moreover ,the feasibility of focusing a fixed frequency acoustic beam at an arbitrary point inside the body in a fully 3d problem is problematic . 
in a theoretical study the use of plane waves of varying frequencieswas proposed instead of the beams .this is a more realistic approach ; however , the analysis in that work relies on several crude approximations ( the conductivity is assumed to be close to 1 , and the electric field is approximated by the first non - zero term in the multipole expansion ) . to summarize , the existing mathematical models of measurements in maet are of approximate nature ; moreover , some of them contradict to others .for example , it was found in that if one uses a pair of electrodes to measure the voltage ( difference of the potentials ) at two points and on the boundary of the body , the result is the integral of the mixed product of three vectors : velocity , magnetic induction and the so - called lead current ( the current that would flow in the body if the difference of potentials were applied at points and ) .the approximate model in implicitly agrees with this conclusion .however , in it is assumed that if the pulse is focused at the point , the measurements will be proportional to the product of the electric potential and conductivity at that point .this assumption contradicts the previous models ; it also seems to be unrealistic since potential is only defined up to an arbitrary constant , while the measurements are completely determined by the physics of the problem . in the present paperwe first derive , starting from equation ( [ e : lorentz ] ) , a rigorous and sufficiently general model of the maet measurements .next , we show that if a sufficient amount of data is measured , one can reconstruct , almost explicitly and in a stable fashion the conductivity of the tissue . for general domains the reconstruction can be reduced to the solution of the inverse problem of tat followed by the solution of the neumann problem for the laplace equation , and a poisson equation . in the simpler case of a rectangular domainthe reconstruction formulae can be made completely explicit , and the solution is obtained by summing several fourier series . in the latter case the algorithm is fast , i.e. it runs in floating point operations on a grid .the results of our numerical simulations show that one can stably recover high resolution images of the conductivity of the tissues from maet measurements even in the presence of a significant noise in the data .suppose that the object of interest whose conductivity we would like to recover is supported within an open and bounded region with the boundary .for simplicity we will assume that is smooth in does not approach 0 , and equals 1 in the vicinity of ; the support of lies in some and the distance between and is non - zero .the object is placed in the magnetic field with a constant magnetic induction , and an acoustic wave generated by a source lying outside propagates through the object with the velocity .then the lorentz force will induce lorentz currents in given by equation ( [ e : lorentz ] ) . 
throughout the textwe assume that the electrical interactions occur on much faster time scale than the mechanical ones , and so all currents and electric potentials depend on only as a parameter .in addition to lorentz currents , the arising electrical potential will generate secondary , ohmic currents with intensities given by ohm s law since there are no sinks or sources of electric charges within the tissues , the total current is divergence - free thus since there are no currents through the boundary , the normal component of the total current vanishes : where is the exterior normal to at point .we will assume that the boundary values of the potential can be measured at all points lying on .more precisely , we will model the measurements by integrating the boundary values with a weight and thus forming measuring functional defined by the formula where is the standard area element .weight can be a function or a distribution , subject to the restriction in particular , if one chooses to use , where is the 2d dirac delta - function , then models the two - point measuring scheme utilized in and . in order to understand what kind of informationis encoded in the values of let us consider solution of the following divergence equation ( to ensure the uniqueness of the solution of the above boundary value problem we will require that the integral of over vanishes . )then equals the electric potential that would be induced in the tissues by injecting currents at the boundary .let us denote the corresponding currents by : let us now apply the second green s identity to functions , , and :=\int\limits_{\partial\omega}\sigma\left [ w_{j}% \frac{\partial}{\partial n}u - u\frac{\partial}{\partial n}w_{j}\right ] da(z ) .\label{e : green}%\ ] ] by taking into account ( [ e : inhomobc ] ) , ( [ e : inhomo ] ) , ( [ e : homodiv ] ) , and ( [ e : homobc ] ) , equation ( [ e : green ] ) can be simplified to further , by integrating the left hand side of the last equation by parts , and by replacing with expression ( [ e : inhomobc ] ) we obtain or this equation generalizes equation ( 1 ) in obtained for the particular case .it is clear from equation ( [ e : vcrossb ] ) that maet measurements recover some information about currents . in order to gain further insightlet us assume that the acoustical properties of the medium , such as speed of sound and density are approximately constant within .( such approximation usually holds in breast imaging which is one of the most important potential applications of this and similar modalities ) .then the acoustic pressure within satisfies the wave equation additionally , is the time derivative of the velocity potential ( see , for example ) , so that velocity potential also satisfies the wave equation now , by taking into account ( [ e : velpoten ] ) , equation ( [ e : vcrossb ] ) can be re - written as further , by noticing that we obtain\nonumber\\ & = \frac{1}{\rho}b\cdot\left [ \int\limits_{\partial\omega}\varphi ( z , t)j_{i}(z)\times n(z)da(z)+\int\limits_{\omega}\varphi(x , t)\nabla\times j_{i}(x)dx\right ] .\label{e : surfterm}%\end{aligned}\ ] ] in some situations the above equations can be further simplified .for example , if at some moment of time velocity potential vanishes on the boundary , then the surface integral in ( [ e : surfterm ] ) also vanishes: similarly , if boundary is located far away from the support of inhomogeneity of , the surface integral in ( [ e : surfterm ] ) can be neglected , and we again obtain equation ( [ e : model ] ) . 
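the simplification just mentioned is easy to check numerically: when the velocity potential is sharply peaked at an interior point y (so it essentially vanishes on the boundary), the measurement reduces to (1/rho) b . (curl j_i)(y). the sketch below (python/numpy; the current field, grid and gaussian width are illustrative assumptions) builds a divergence-free current on a grid, evaluates the volume integral against a narrow normalized gaussian, and compares it with the point value.

```python
import numpy as np

n, L = 64, 2.0
ax = np.linspace(-L / 2, L / 2, n)
h = ax[1] - ax[0]
X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')

# a smooth divergence-free current: J = curl A with A = (0, 0, exp(-(x^2 + y^2 + z^2)))
A = np.exp(-(X**2 + Y**2 + Z**2))
Jx = np.gradient(A, h, axis=1)      # dA_z/dy
Jy = -np.gradient(A, h, axis=0)     # -dA_z/dx
Jz = np.zeros_like(A)

def curl(Fx, Fy, Fz, h):
    cx = np.gradient(Fz, h, axis=1) - np.gradient(Fy, h, axis=2)
    cy = np.gradient(Fx, h, axis=2) - np.gradient(Fz, h, axis=0)
    cz = np.gradient(Fy, h, axis=0) - np.gradient(Fx, h, axis=1)
    return cx, cy, cz

Cx, Cy, Cz = curl(Jx, Jy, Jz, h)
B, rho = np.array([0.0, 0.0, 1.0]), 1.0
y0 = (0.3, -0.2, 0.1)               # focusing point, an interior point away from the boundary

# narrow gaussian stand-in for a delta-focused velocity potential, normalized to unit integral
sig = 0.08
phi = np.exp(-((X - y0[0])**2 + (Y - y0[1])**2 + (Z - y0[2])**2) / (2 * sig**2))
phi /= phi.sum() * h**3

M = (1.0 / rho) * np.sum(phi * (B[0] * Cx + B[1] * Cy + B[2] * Cz)) * h**3

i = tuple(np.argmin(np.abs(ax - c)) for c in y0)
point_value = (1.0 / rho) * (B[0] * Cx[i] + B[1] * Cy[i] + B[2] * Cz[i])
print(M, point_value)               # the two values should nearly coincide
```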
equation ( [ e : surfterm ] ) is our mathematical model of the maet measurements . our goal is to reconstruct from measurements conductivity by varying , if necessary , , , and .our strategy for solving this problem is outlined in the following sections .however , some conclusions can be reached just by looking at the equation ( [ e : surfterm ] ) .for example , one can notice that if three sets of measurements are conducted with magnetic induction pointing respectively in the directions of canonical basis vectors , , and , one can easily reconstruct the sum of integrals in the brackets in ( [ e : surfterm ] ) .further , if one focuses so that at the moment it becomes the dirac -function centered at , i.e. then one immediately obtains the value of at the point ( such a focusing is theoretically possible as explained in the next section ) .thus , by moving the focusing point through the object , one can reconstruct the curl of in all of .our model also explains the observation reported in that no signal is obtained when the acoustic wavepacket is passing through the regions of the constant . in such regionscurrent is a potential vector field and , therefore , the integral in ( [ e : model ] ) vanishes .finally , it becomes clear that an accurate image reconstruction is impossible if monochromatic acoustic waves of only a single frequency are used for scanning , no matter how well they are focused . in this casethe spatial component of is a solution of the helmholtz equation and , within it can be approximated by the plane waves in the form with .let us assume for simplicity that the electrical boundary is removed to infinity .then , measuring given by equation ( [ e : model ] ) is equivalent to collecting values of the fourier transform of corresponding to the wave vectors lying on the surface of the sphere in the fourier domain .the spatial frequencies of function with wave vectors that do not lie on this sphere can not be recovered .the first step toward the reconstruction of the conductivity is to reconstruct currents corresponding to certain choices of .let us assume that all the measurements are repeated three times , with magnetic induction pointing respectively in the directions of canonical basis vectors , , and . then , as mentioned above , if , one readily recovers from the measurements the curl of the current at , i.e. . generating such a velocity potential is possible at least theoretically . for example , if one simultaneously propagates plane waves with all possible wave vectors , the combined velocity potential at the moment will add up to the dirac delta - function .such an arrangement is unlikely to be suitable for a practical implementation : firstly , the sources of sound would have to be removed far from the object to produce a good approximation to plane waves within the object .secondly , the sources would have to completely surround the object to irradiate it from all possible directions .finally , all the sources would have to be synchronized .a variation of this approach is to place small point - like sources in the vicinity of the object . in this case , instead of plane waves , spherical monochromatic waves or propagating spherical fronts would be generated .these types of waves can also be focused into a delta - function ( some discussion of such focusing and a numerical example can be found in ) . however , a more practical approach is to utilize some realistic measuring configuration ( e.g. 
one consisting of one or several small sources scanning the boundary sequentially ) , and then to synthesize algorithmically from the realistic data the desired measurements that correspond to the delta - like velocity potential .such a _ synthetic focusing _ was first introduced in the context of hybrid methods in .it was shown , in applications to aet and to the acoustically modulated optical tomography , that such a synthetic focusing is equivalent to solving the inverse problem of tat .the latter problem has been studied extensively , and a wide variety of methods is known by now ( we will refer the reader to reviews and references therein ) .the same technique can be applied to maet , as explained below .let us consider a spherical propagating front originated at the point .if the initial conditions on the pressure are{c}% p_{y}(x,0)=\delta(x - y),\\ \frac{\partial}{\partial t}p_{y}(x,0)=0 , \end{array } \right .\nonumber\ ] ] then can be represented in the whole of by means of the kirchhoff formula latexmath:[\[p_{y}(x , t)=\frac{\partial}{\partial t}\frac{\delta(|x - y|-ct)}{4\pi placed at and excited by a delta - like electric pulse ; such devices are common in ultrasonic imaging .velocity potential corresponding to then equals the role of variables and is clearly interchangeable ; is the retarded green s function of the wave equation either in and , or in and .moreover , consider the following convolution of a finitely supported smooth function with latexmath:[\[h(y , t)=\int\limits_{\mathbb{r}^{3}}h(y)\frac{\delta(|x - y|-ct)}{4\pi following initial value problem ( ivp ) in : {c}% \frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}h(y , t)=\delta_{y}h(y , t)\\ h(y,0)=0,\\ \frac{\partial}{\partial t}h(y,0)=h(y ) . \end{array } \right .\label{e : wavesyst}%\ ] ] suppose now that a set of maet measurements is obtained with propagating wave fronts with different centers ( while and are kept fixed ) . by substituting ( [ e : acfront ] ) into ( [ e : surfterm ] ) we find that , for each the corresponding measuring functional can be represented as the sum of two terms : where it is clear from the above discussion that both terms and solve the wave equation in , subject to the initial conditions where is the delta - function supported on . while singular term solves the wave equation in the sense of distributions , the regular term represents a classical solution of the wave equation .suppose conductivity and boundary currents are functions of their arguments , and the boundary is infinitely smooth .then the regular part of the measuring functional is a solution of the wave equation satisfying initial conditions ( [ e : initderreg ] ) and ( [ e : initfun ] ) . 
under the above conditions , potential solving the boundary value problem ( [ e : homodiv ] ) , ( [ e : homobc ] ) is a function in due to the classical estimates on the smoothness of solutions of elliptic equations with smooth coefficients .therefore , the right hand side of ( [ e : initderreg ] ) can be extended by zero to a function in term defined by equation ( [ e : realmereg ] ) solves wave equation ( [ e : waveeqm ] ) ( due to the kirchhoff formula , see ) subject to infinitely smooth initial conditions ( [ e : initderreg ] ) , ( [ e : initfun ] ) , and thus it is a function for all we would like to reconstruct the right hand side of ( [ e : initderreg ] ) ( and , possibly that of ( [ e : initdersing ] ) ) from the measured values of .since is assumed constant , in the 3d case will vanish ( due to the huygens principle ) for where is the maximal distance between the points of and the acoustic sources .we will assume that is measured for all ] , the term can be exactly reconstructed in ( by using one of the above - mentioned tat algorithms ) . moreover , if the conditions of proposition 1 are satisfied , the reconstruction is exact point - wise . [[ reconstructing - the - curl ] ] reconstructing the curl + + + + + + + + + + + + + + + + + + + + + + + in order to reconstruct the curl of the current we need to repeat the procedure of finding times , with three different orientations of , . as a result, we find the projections of curl on and thus obtain: ( outside the curl of equals 0 since the conductivity is constant there ) .if , in addition , lies inside and has been reconstructed , we obtain the term the considerations of the previous section show how to recover from the values of the measuring functionals the curl of the current and , in some situations , the surface term ( [ e : surface ] ) .the next step is to reconstruct current itself .let us start with the most general situation and assume that only the curl has been reconstructed .since current is a purely solenoidal field , there exists a vector potential such that where has the form and is both a solenoidal and potential field .then there exists harmonic such that we know that therefore , by combining equations ( [ e : hdec ] ) and ( [ e : bc ] ) one obtains and now can be recovered , up to an arbitrary additive constant , by solving the neumann problem{c}% \delta\psi(x)=0,\quad x\in\omega\\ \frac{\partial}{\partial n}\psi(z)=i(z)-n\cdot\left ( \nabla\times \int\limits_{\omega}\frac{c(y)}{4\pi(z - y)}dy\right ) , \quad z\in\partial \omega . \end{array } \right .\label{e : neumann}%\ ] ] now is uniquely defined by the formula under smoothness assumption of proposition 1 current is given by the formula ( [ e : currentformula ] ) , where function is the ( classical ) solution of the neumann problem ( [ e : neumann ] ) .if in addition to the curl , the surface term ( equation ( [ e : surface ] ) ) has been reconstructed , there is no need to solve the neumann problem . instead ,function is given explicitly by the following formula : the final expression for current can now be written as \nonumber\\ & = \nabla_{x}\times\int\limits_{\bar{\omega}}\frac{c(y)+j_{i}(y)\times n(y)\delta_{\partial\omega}(y)}{4\pi(x - y)}dy,\quad x\in\omega,\label{e : fancy}%\end{aligned}\ ] ] where is the closure the term with the delta - function in the numerator of ( [ e : fancy ] ) coincides with the surface term given by equation ( [ e : surface ] ) . 
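a rough numerical stand-in for the vector-potential part of the formula above (python/numpy): given the reconstructed curl c on a grid, a periodic spectral poisson solve is used in place of the newtonian potential, and the curl of the result gives the solenoidal part of the current. this is only an illustration — the harmonic correction grad psi, i.e. the solution of the neumann problem, is deliberately omitted here.

```python
import numpy as np

def solenoidal_part(C, h):
    """curl of a vector potential W with Delta W = -C, computed with a periodic FFT solve.

    C : array of shape (3, n, n, n), the reconstructed curl of the current on a cubic grid.
    returns nabla x W; the harmonic correction grad psi from the neumann problem is not included.
    """
    n = C.shape[1]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)
    KX, KY, KZ = np.meshgrid(k, k, k, indexing='ij')
    k2 = KX**2 + KY**2 + KZ**2
    k2[0, 0, 0] = 1.0                              # avoid division by zero for the mean mode
    W = np.empty(C.shape, dtype=complex)
    for i in range(3):
        Chat = np.fft.fftn(C[i])
        Chat[0, 0, 0] = 0.0
        W[i] = np.fft.ifftn(Chat / k2)             # solves Delta W_i = -C_i (periodic approximation)
    W = W.real
    grad = lambda F, axis: np.gradient(F, h, axis=axis)
    return np.stack([grad(W[2], 1) - grad(W[1], 2),
                     grad(W[0], 2) - grad(W[2], 0),
                     grad(W[1], 0) - grad(W[0], 1)])
```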
in order to avoid the direct numerical reconstruction of the singular term, one may want to try to modify the utilized tat reconstruction algorithm so as to recover directly the convolution with contained in equation ( [ e : fancy ] ) .the practicality of such an approach requires further investigation .finally , in certain simple domains one can find a way to solve the equation for in such a way as to explicitly satisfy boundary conditions ( [ e : bc ] ) and thus to avoid the need of solving the neumann problem ( [ e : neumann ] ) .one such domain is a cube ; we present the corresponding algorithm in section [ s : cube ] . in order to reconstruct the conductivity we will utilize three currents , , corresponding to three different boundary conditions . as we mentioned before we are using three different magnetic inductions , . as a result, we obtain the values of the following measuring functionals where is the electric potential corresponding to the acoustic wave with the velocity potential propagating through the body in the presence of constant magnetic field .notice that the increase in the number of currents does not require additional physical measurements : the same measured boundary values of are used to compute different measuring functionals by changing the integration weight in equation ( [ e : funcdef ] ) . for each of the currents we apply one of the above - mentioned tat reconstruction techniques to compute , the knowledge of the latter functions for , , allows us to recover the curls , , ( equation ( [ e : curl ] ) ) and , possibly , the surface terms ( [ e : surface ] ) .finally , currents are reconstructed by one of the methods described in the previous section . at the first sight , finding from the knowledge of , , is a non - linear problem , since the unknown electric potentials depend on . however , as shown below , this problem can be solved explicitly without a linearization or some other approximation . indeed , for any , the following formula holds: so that now one can try to find at each point in by solving the following ( in general ) over - determined system of linear equations : {c}% \nabla\ln\sigma\times j^{(1)}=c^{(1)}\\ \nabla\ln\sigma\times j^{(2)}=c^{(2)}\\ \nabla\ln\sigma\times j^{(3)}=c^{(3)}% \end{array } \right . .\label{e: linsys}%\ ] ] let us assume first , that currents , form a basis in at each point in .there are 9 equations in system ( [ e : linsys ] ) , whose unknowns are the three components of , but the rank of the corresponding matrix does not exceed 6 . in order to see this , let us multiply each equation of ( [ e : linsys ] ) by .( since the three currents form a basis , this is equivalent to a multiplication by a non - singular matrix ) .we obtain{c}% \nabla\ln\sigma\cdot(j^{(1)}\times j^{(2)})=c^{(1)}\cdot j^{(2)}\\ \nabla\ln\sigma\cdot(j^{(1)}\times j^{(2)})=-c^{(2)}\cdot j^{(1)}\\ \nabla\ln\sigma\cdot(j^{(1)}\times j^{(3)})=c^{(1)}\cdot j^{(3)}\\ \nabla\ln\sigma\cdot(j^{(1)}\times j^{(3)})=-c^{(3)}\cdot j^{(1)}\\ \nabla\ln\sigma\cdot(j^{(2)}\times j^{(3)})=c^{(2)}\cdot j^{(3)}\\ \nabla\ln\sigma\cdot(j^{(2)}\times j^{(3)})=-c^{(3)}\cdot j^{(2)}% \end{array } \right . .\nonumber\ ] ] in the case of perfect measurements the right hand sides of equations number 2 , 4 , and 6 in the above system would coincide with those of equations 1 , 3 , and 5 , respectively , and therefore the even - numbered equations could just be dropped from the system. 
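numerically , the pointwise recovery of the gradient of the log - conductivity can also be done without any hand manipulation of the equations : at each point one simply stacks the three vector equations and solves them in the least - squares sense . a minimal sketch for a single point follows ( our own notation ) .

```python
import numpy as np

def skew(v):
    """matrix [v]_x such that [v]_x w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def grad_log_sigma(J, C):
    """least-squares solution of grad(ln sigma) x J^(i) = C^(i), i = 1, 2, 3, at one point.
    J and C are (3, 3) arrays whose rows are the three currents and their curls."""
    A = np.vstack([-skew(Ji) for Ji in J])     # x cross J^(i) = -[J^(i)]_x x
    rhs = np.reshape(C, -1)
    x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x
```

the explicit averaged solution derived below achieves the same goal in closed form and makes transparent the role of the determinant of the three currents .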
however , in the presence of noise it is better to take the average of the equations with identical left sides , which is equivalent to finding the least squares solution of this system .we thus obtain:{c}% \nabla\ln\sigma\cdot(j^{(1)}\times j^{(2)})=\frac{1}{2}(c^{(1)}\cdot j^{(2)}-c^{(2)}\cdot j^{(1)})\\ \nabla\ln\sigma\cdot(j^{(1)}\times j^{(3)})=\frac{1}{2}(c^{(1)}\cdot j^{(3)}-c^{(3)}\cdot j^{(1)})\\ \nabla\ln\sigma\cdot(j^{(2)}\times j^{(3)})=\frac{1}{2}(c^{(2)}\cdot j^{(3)}-c^{(3)}\cdot j^{(2 ) } ) \end{array } \right . .\label{e: leastsq}%\ ] ] after some simple linear algebra transformations ( see appendix ) the solution of ( [ e : leastsq ] ) can be written explicitly as follows:{c}% c^{(2)}\cdot j^{(3)}-c^{(3)}\cdot j^{(2)}\\ -c^{(1)}\cdot j^{(3)}+c^{(3)}\cdot j^{(1)}\\ c^{(1)}\cdot j^{(2)}-c^{(2)}\cdot j^{(1)}% \end{array } \right ) , \label{e : gradsol}%\ ] ] where is matrix whose columns are the cartesian coordinates of the currents :{1pt}{26pt}j^{(2)}\rule[-10pt]{1pt}{26pt}% j^{(3)}\right ) .\label{e : matrix}%\ ] ] since , by assumption , currents form a basis at each point of , the denominator in ( [ e : gradsol ] ) never vanishes and , thus , equation ( [ e : gradsol ] ) can be used to reconstruct in all of .finally , we compute the divergence of both sides in ( [ e : gradsol]):{c}% c^{(2)}\cdot j^{(3)}-c^{(3)}\cdot j^{(2)}\\ -c^{(1)}\cdot j^{(3)}+c^{(3)}\cdot j^{(1)}\\ c^{(1)}\cdot j^{(2)}-c^{(2)}\cdot j^{(1)}% \end{array } \right ) \right ] , \label{e : finalsys}%\ ] ] and solve the above poisson equation for in subject to the dirichlet boundary conditions the above reconstruction procedure works if currents , , and are linearly independent at each point in . for an arbitrary conductivity this can not be guaranteed .there exists a counterexample describing such a conductivity for which a boundary condition can be found such that the corresponding current vanishes at a certain point within the domain .clearly , while such a situation can occur , it is unlikely to occur for an arbitrary conductivity , and our method should still be useful in practice .moreover , the condition of the three currents forming a basis at each point in space can be relaxed .below we show that if only one of the currents , , and vanishes ( say , ) at some point and the two other currents are not parallel , the following truncated system is still uniquely solvable:{c}% \nabla\ln\sigma\times j^{(1)}=c^{(1)}\\ \nabla\ln\sigma\times j^{(2)}=c^{(2)}% \end{array } \right . .\label{e : smallsys}%\ ] ] indeed , let us multiply via dot product the above equations by and respectively , and subtract them .we obtain now , multiply the first equation in ( [ e : smallsys ] ) by .the left hand side will take the form \\ & = \nabla\ln\sigma\cdot\left [ \left ( j^{(1)}\cdot j^{(2)}\right ) j^{(1)}-\left ( j^{(1)}\cdot j^{(1)}\right ) j^{(2)}\right ] , \end{aligned}\ ] ] which leads to the equation = c^{(1)}% \cdot(j^{(1)}\times j^{(2)}).\label{e : new2}%\ ] ] similarly , by multiplying the second equation in ( [ e : smallsys ] ) by we obtain = c^{(2)}% \cdot(j^{(2)}\times j^{(1)}).\label{e : new3}%\ ] ] equations ( [ e : new1 ] ) , ( [ e : new2 ] ) and ( [ e : new3 ] ) form a linear system with three equations and three unknowns .in other to show that the matrix of this system is non - singular , it is enough to show that the vectors given by the bracketed expressions in ( [ e : new2 ] ) and ( [ e : new3 ] ) are not parallel .the cross - product of these terms yields\times\lbrack ... 
]=(j^{(1)}\times j^{(2)})\left [ \left ( j^{(1)}\cdot j^{(2)}\right ) ^{2}-\left ( j^{(1)}\cdot j^{(1)}\right ) \left ( j^{(2)}\cdot j^{(2)}\right ) \right ] .\nonumber\ ] ] the above expression is clearly non - zero if and are not parallel , and therefore the system of the three equations ( [ e : new1 ] ) , ( [ e : new2 ] ) , ( [ e : new3 ] ) is uniquely solvable in this case .suppose that the conditions of proposition 1 are satisfied , and that the conductivity and boundary currents are such that at each point two of the three correspondent currents are non - parallel .then the logarithm of the conductivity is uniquely determined by the values of the measuring functionals and . ] , and the sound sources are located on .( in practice such a measuring configuration will occur if the object is placed in a cubic tank filled with conductive liquid , and the sound sources and electrical connections are placed on the tank walls ) .we will use three boundary conditions defined by the formulae:{cc}% \frac{1}{2 } , & x\in\partial\omega,\quad x_{k}=1\\ -\frac{1}{2 } , & x\in\partial\omega,\quad x_{k}=0\\ 0 , & x\in\partial\omega,\quad0<x_{k}<1 \end{array } \right . , \quad k=1,2,3,\quad x=(x_{1},x_{2},x_{3 } ) .\label{e : currents}%\ ] ] as before , all the measurements are repeated with three different direction of the magnetic field , , and the values of functionals ( see equation ( [ e : goodfunc ] ) ) are computed from the measurements of the electrical potentials on , for ] , such that , for and the first eight derivatives of vanish at and at .radius was equal to 0.34 in this simulation .a gray scale picture of this phantom is shown in figure [ f : recsmoo](a ) .figure [ f : recsmoo](b ) demonstrates the cross - section by the plane of the image reconstructed on a computational grid from simulated maet data with added simulated noise .the acoustic sources were located at the nodes of cartesian grids on each of the six faces of cubic domain . for each source223 values of each of the measuring functionals were computed , representing 223 different time samples or , equivalently , 223 different radii of the propagating acoustic front .the measurement noise was simulated by adding values of uniformly distributed random variable to the data .the so - simulated noise was scaled in such a way that for each time series ( one source position ) the noise intensity in norm was 50% of the intensity of the signal ( i.e. of the norm of the data sequence representing the measuring functional ) . 
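the noise model just described is simple to reproduce ; in the sketch below ( our own function names ) the uniform noise is rescaled so that its norm is a prescribed fraction of the norm of each simulated time series .

```python
import numpy as np

def add_scaled_noise(signal, level=0.5, rng=None):
    """add uniform noise scaled so that ||noise||_2 = level * ||signal||_2."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.uniform(-1.0, 1.0, size=signal.shape)
    noise *= level * np.linalg.norm(signal) / np.linalg.norm(noise)
    return signal + noise

# example: one simulated time series of a measuring functional, contaminated at the 50% level
clean = np.sin(np.linspace(0.0, 3.0, 223))
noisy = add_scaled_noise(clean, level=0.5)
```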
in spite of such high level of noise in the data , the reconstructed image shown in figure [ f : recsmoo](b ) contains very little noise .this can also be verified by looking at the plot of the cross section of the latter image along the line , presented in figure [ f : smoprof ] .[ c]cc with phantom shown in figure [ f:3dphan ] ; ( a ) and ( c ) are the cross sections of the phantom by planes and , respectively ; ( b ) and ( d ) are the corresponding cross sections of the reconstruction from the data with added ( in sense ) noise , title="fig:",width=220,height=220 ] & with phantom shown in figure [ f:3dphan ] ; ( a ) and ( c ) are the cross sections of the phantom by planes and , respectively ; ( b ) and ( d ) are the corresponding cross sections of the reconstruction from the data with added ( in sense ) noise , title="fig:",width=220,height=220 ] + ( a ) & ( b ) + & + with phantom shown in figure [ f:3dphan ] ; ( a ) and ( c ) are the cross sections of the phantom by planes and , respectively ; ( b ) and ( d ) are the corresponding cross sections of the reconstruction from the data with added ( in sense ) noise , title="fig:",width=220,height=220 ] & with phantom shown in figure [ f:3dphan ] ; ( a ) and ( c ) are the cross sections of the phantom by planes and , respectively ; ( b ) and ( d ) are the corresponding cross sections of the reconstruction from the data with added ( in sense ) noise , title="fig:",width=220,height=220 ] + ( c ) & ( d ) in order to better understand the origins of such unusually low noise sensitivity , we plot in figure [ f : sigprof ] a profile of one of the time series , for point .the thick black line represents the accurate measurements , the gray line shows the with the added noise . in figure [ f : crlprof ] we plot a profile of the reconstructed curl ( gray line ) against the correct values ( black line ) .( this plot corresponds to the cross - section of the third component of along the line , .the latter figure shows that noise is amplified during the first step of the reconstruction ( inversion of the spherical mean radon transform ) .this is to be expected , since the corresponding inverse problem is mildly ill - posed , similarly to the inversion of the classical radon transform .however , on the second step of the reconstruction , corresponding to solving the problem ( [ e : finalsys ] ) , the noise is significantly smoothed out .this is not surprising , since the corresponding operator is a smoothing one .as a result , we obtain the low - noise image shown in figure [ f : recsmoo](b ) .the second simulation we report used an ( almost ) piece - wise constant phantom of modeled by a linear combination of several slightly smoothed characteristic functions of balls of different radii .the centers of the balls were located on the pair - wise intersections of planes , , , as shown in figure [ f:3dphan ] .the minimum value of in this phantom was 0 ( dark black color ) , the maximum value is 1 ( white color ) .the simulated maet data corresponded to the acoustic sources located at the nodes of cartesian grids on each of the six faces of cubic domain . for each source , a time series consisting of 447 values for each measuring functional were simulated . in order to model the noise , to each of the time series we added a random sequence scaled so that the norm of the noise was equal to that of the signal ( i.e. noise was applied ) . 
in figure [ f : noiseprof ] we present the profile of the time series for the point .as before , the thick black line represents the accurate measurements , and the gray line shows the data with added noise . ,the thick black line represents the phantom , the gray line corresponds to the image reconstructed from the data with added ( in sense ) noise , width=288,height=134 ] the reconstruction was performed on the grid of size .the cross sections of the reconstructed image by planes and are shown in the figure [ f : rec3dflat](b ) and ( d ) , next to the corresponding images of the phantom ( i.e. parts ( a ) and ( c ) of the latter figure ) .the cross section profile of the image shown in part ( d ) , corresponding to the line , is plotted in figure [ f : coolprof ] . as in the first simulation, we obtain a very accurate reconstruction with little noise .this is again the result of a smoothing operator applied when the poisson problem is solved on the last step of the algorithm .an additional improvement in the quality of the image comes from the rather singular nature of the second phantom .indeed , while the noise is more or less uniformly distributed over the volume of the cubic domain , the signal ( the non - zero ) is supported in a rather small fraction of the volume , thus increasing the visual contrast between the noise and the signal .in section 1 we presented a mathematical model describing the maet measurements . in general, it agrees with the model used in .however , instead of point - wise electrical boundary measurements we consider a more general scheme .the advantage of such an approach is generality and ease of analysis and numerical modeling . in particular, it contains as a partial case the pointwise measurement of electrical potentials ( reported in ) .another novel element in this model is the use of velocity potentials which allow us to simplify analysis and obtain a better understanding of the problem at hand .we discussed in detail the case of acoustic signal presented by propagating acoustic fronts from small sources .however , the same mathematics can be used to model time - harmonic sources .since the problem is linear with respect to the velocity potential , the connection between the two problems is through the direct and inverse fourier transforms of the data in time . finally , plane wave irradiation ( considered for example in ) is a partial case of irradiation by time harmonic sources , when they are located far away from the object .in section 2 we presented a general scheme for the solution of the inverse problem of maet obtained under the assumption of propagating spherical acoustic fronts .( as we mentioned above , a slight modification of this scheme would allow one to utilize time harmonic sources and plane waves instead of the fronts we used ) .the scheme consists of the following steps : 1 .apply one of the suitable tat reconstructions techniques to measuring functionals to reconstruct the regular terms at and thus to obtain the curls of 2 .compute currents from their curls ( this step may require solving the neumann problem for the laplace equation ) 3 .find at each point in using formula ( [ e : gradsol ] ) or by solving system of equations ( [ e : new1 ] ) , ( [ e : new2 ] ) , ( [ e : new3 ] ) .4 . 
find values of by computing the divergence of 5 .compute by solving the poisson problem with the zero dirichlet boundary conditions .theoretical properties and numerical methods for all three steps are well known .the first step is mildly ill - posed ( similar to the inversion of the classical radon transform ) , the second step is stable , and the third step is described by a smoothing operator .our rather informal discussion suggests that the total reconstruction procedure is stable ( it does not exhibit even the mild instability present in classical computer tomography ) , and our numerical experiments confirm this assertion .we leave a rigorous proof of this conjecture for the future work .maet is similar to aet in that it seeks to overcome the instability of eit by adding the ultrasound component to the electrical measurements . however , maet has some advantages : 1 .the arising problem is linear and can be solved explicitly .2 . the aeit measurements seem to produce a very weak signal ; successful acquisition of such signals in a realistic measuring configuration have not been reported so far .the signal in maet is stronger ; in fact , first reconstructions from real measurements have already been obtained . in section 3we presented a completely explicit set of formulae that yield a series solution of the maet problem for the case of the cubic domain .it reduces the problem to a set of sine and cosine fourier transforms , and thus , it can be easily implemented using ffts .this , in turn , results in a fast algorithm that requires floating point operations to complete a reconstructions on a cartesian grid .it is theoretically possible to shorten the potentially long acquisition time by reducing the number of different directions of .if only two orthogonal directions of magnetic field are used , only two components of a curl will be reconstructed on the first step of our method ( say and however , since , since vanishes on , the above equation can be integrated in and thus can be reconstructed from and . 
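the reduction to two directions of the magnetic induction can be made concrete : the curl is divergence - free , and its third component vanishes on the boundary , so it is obtained by integrating the in - plane divergence of the other two components . a hedged sketch on a uniform grid ( names and grid conventions are ours ) :

```python
import numpy as np

def third_curl_component(cx, cy, h):
    """recover c_z from c_x and c_y using div c = 0 and c_z = 0 on the face z = 0.
    cx, cy: (n, n, n) arrays indexed as [ix, iy, iz]; h: grid spacing."""
    div_xy = np.gradient(cx, h, axis=0) + np.gradient(cy, h, axis=1)
    return -np.cumsum(div_xy, axis=2) * h      # c_z(x, y, z) = -int_0^z (d_x c_x + d_y c_y) dz'
```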
a further study is needed to see how much this procedure would affect the stability of the whole method .the author gratefully acknowledges support by the nsf through the dms grant 0908208 .consider the following system of linear equations{c}% x\cdot(a\times b)=r_{1}\\ x\cdot(a\times c)=r_{2}\\ x\cdot(b\times c)=r_{3}% \end{array } \right .\label{e : mat0}%\ ] ] where , , and are given linearly independent vectors from , is the vector of unknowns , and , , are given numbers .the first equation can be re - written in the following form{ccc}% x_{1 } & x_{2 } & x_{3}\\ a_{1 } & a_{2 } & a_{3}\\ b_{1 } & b_{2 } & b_{3}% \end{array } \right\vert = \left\vert \begin{array } [ c]{ccc}% x_{1 } & a_{1 } & b_{1}\\ x_{2 } & a_{2 } & b_{2}\\ x_{3 } & a_{3 } & b_{3}% \end{array } \right\vert = \left\vert \begin{array } [ c]{ccc}% x_{1 } & b_{1 } & c_{1}\\ x_{2 } & b_{2 } & c_{2}\\ x_{3 } & b_{3 } & c_{3}% \end{array } \right\vert \nonumber\ ] ] or{ccc}% x_{1 } & b_{1 } & c_{1}\\ x_{2 } & b_{2 } & c_{2}\\ x_{3 } & b_{3 } & c_{3}% \end{array } \right\vert , \label{e : mat1}%\ ] ] where is a matrix whose columns are vectors , , and , and .similarly,{ccc}% a_{1 } & x_{1 } & c_{1}\\ a_{2 } & x_{2 } & c_{2}\\ a_{3 } & x_{3 } & c_{3}% \end{array } \right\vert , \label{e : mat2}%\ ] ] and{ccc}% x_{1 } & b_{1 } & c_{1}\\ x_{2 } & b_{2 } & c_{2}\\ x_{3 } & b_{3 } & c_{3}% \end{array } \right\vert , \label{e : mat3}%\ ] ] where and .formulae ( [ e : mat1])-([e : mat3 ] ) can be viewed as the solution of the following system of equations obtained using cramer s rule:{ccc}% a_{1 } & b_{1 } & c_{1}\\ a_{2 } & b_{2 } & c_{2}\\ a_{3 } & b_{3 } & c_{3}% \end{array } \right ) \left ( \begin{array } [ c]{c}% r_{3}\\ r_{2}\\ r_{1}% \end{array } \right ) = \left ( \begin{array } [ c]{c}% x_{1}\\ x_{2}\\ x_{3}% \end{array } \right ) .\nonumber\ ] ] therefore , solution of system ( [ e : mat0 ] ) is given by the formula{ccc}% a_{1 } & b_{1 } & c_{1}\\ a_{2 } & b_{2 } & c_{2}\\ a_{3 } & b_{3 } & c_{3}% \end{array } \right ) \left ( \begin{array } [ c]{c}% r_{3}\\ -r_{2}\\ r_{1}% \end{array } \right ) .\nonumber\ ] ] in addition , .m. agranovsky and p. kuchment , uniqueness of reconstruction and an inversion procedure for thermoacoustic and photoacoustic tomography with variable sound speed , _ inverse problems _ * 23 * ( 2007 ) 2089102 .g. ambartsoumian and s. patch , thermoacoustic tomography : numerical results .proceedings of spie 6437 _ photons plus ultrasound : imaging and sensing 2007 : the eighth conference on biomedical thermoacoustics , optoacoustics , and acousto - optics _ , ( 2007 ) alexander a. oraevsky , lihong v. wang , editors , 64371b .y. capdeboscq , j. fehrenbach , f. de gournay , o. kavian , imaging by modification : numerical reconstruction of local conductivities from corresponding power density measurements , _ siam j. imaging sciences , _ * 2/4 * ( 2009 ) 10031030 .l. kunyansky and p. kuchment , synthetic focusing in acousto - electric tomography , in _ oberwolfach report _ no .18/2010 doi : 10.4171/owr/2010/18 , workshop : mathematics and algorithms in tomography , organised by martin burger , alfred louis , and todd quinto , april 11th 17th , ( 2010 ) 4447 .s. j. norton and m. linzer , ultrasonic reflectivity imaging in three dimensions : exact inverse scattering solutions for plane , cylindrical , and spherical apertures , _ ieee trans . on biomed .* 28 * ( 1981 ) 200202 .
magneto - acousto - electric tomography ( maet ) , also known as the lorentz force or hall effect tomography , is a novel hybrid modality designed to be a high - resolution alternative to the unstable electrical impedance tomography . in the present paper we analyze existing mathematical models of this method , and propose a general procedure for solving the inverse problem associated with maet . it consists in applying to the data one of the algorithms of thermo - acoustic tomography , followed by solving the neumann problem for the laplace equation and the poisson equation . for the particular case when the region of interest is a cube , we present an explicit series solution resulting in a fast reconstruction algorithm . as we show , both analytically and numerically , maet is a stable technique yielding high - resolution images even in the presence of significant noise in the data .
the 2011 tohoku earthquake with magnitude of 9.0 or 9.1 struck the northeastern part of japan .the earthquake was ranked as the fourth largest in the world .the overall cost of the damage has been estimated to be the tens of billions of us dollars , and 19,295 were killed and 359,073 houses were destroyed by the earthquake and resulting tsumani .this demonstrated that extreme events actually occur , implying that the probability of extreme events is small yet nonnegligible .this behavior can be characterized by large variances and fat tails in probability distributions both of damage and of natural disaster .statistical properties of damage are also influenced by those of population / property exposed to disasters that have been described by large variances and fat tails . despite its importance, the fat tail property and its implications in risk analysis have been far from being fully understood , although fat - tailed distributions of natural disaster and population / property have been intensively studied in statistical physics , geography , and other disciplines . in order to better understand the effect of natural disaster and population / property on damage, we devise a simple model by combining the occurrence distribution of natural disaster with population / property distributions .these distributions are assumed to have fat tails as various types of natural disasters , like earthquake and forest fire , and population / property are known to be described by fat - tailed or power - law distributions .we also take into account the tendency that population / property are spatially correlated partly due to the urbanization as there exist empirical and theoretical studies supporting such a tendency .we make additional assumptions .firstly , the disasters are moving along a straight line .secondly , the vulnerability is constant independent of the intensity of natural disaster and of exposed population / property .these assumptions can be easily relaxed to incorporate more generalized features , such as nonlinear dependence of vulnerability .our model suggests large variances and fat tails of casualty and property damage by natural disaster .we analytically solve the model for the limiting cases such that population / property are either fully uncorrelated or fully correlated in space .the more realistic , partially correlated cases are studied by numerical simulations because they are not analytically solvable . in general , the fat tail property of damage is expected to be affected by fat tail properties and spatial correlations of natural disaster and population / property . however , this is not always the case .we find that the fat tail of damage can be determined by either that of natural disaster or those of population / property , depending on which has a fatter tail than the other .the spatial correlations of population / property can enhance or reduce the fat tail property of damage , depending on how fat the tails of population / property distributions are . 
in order to empirically support our model, we analyze the dataset of casualties and property damages by tornadoes in the united states over 19702011 .it is confirmed that the distributions of damage show fat tails .our research has implications in the effect of large variance and fat tail of damage on risk - related decision making .if we treat the variance of damage with the assumption of normal or thin - tailed distributions as in the typical risk analysis , it may lead to inefficient social investment to reduce vulnerability and consequently the damage . as weitzman demonstrated with his dismal theorem , the fat tail property of uncertainty results in arbitrarily large or divergent expected loss , threatening the standard cost - benefit analysis .our results emphasize the need to focus more on decision under uncertainty with fat - tailed distributions .the paper is organized as following .our model is introduced in section model and its analytic and numerical results are presented in section result , with empirical analysis of damages by tornadoes . in section conclusion , we conclude our paper with some remarks .empirical findings about damages by natural disasters indicate that such damages can have large variances and be often characterized by fat - tailed or power - law distributions .the power - law distribution , e.g. , for a damage , is formally presented as where is a power - law exponent characterizing the degree of fat tail property . in such distributions ,the statistics can not be properly represented only by means due to very large or even diverging variances .the probability of extreme events is small yet nonnegligible , while the probability rapidly approaches zero for the thin - tailed cases such as exponential distributions .in general , the risk or damage by natural disaster has been analyzed as a function of three components : natural disaster , population / property exposed to the disaster , and vulnerability of those population / property . in order to account for large variance of damage, we devise a simple model by combining the occurrence distribution of natural disaster with population / property distributions , while the vulnerability is assumed to be constant .we will discuss each of three components in more detail .firstly , for population / property distributions , we consider two characteristics : probability density function ( pdf ) and spatial correlation . some pdfs of population / property , denoted by ,are known to show power - laws as with exponent .the estimated value of for the wealth of the world s richest people over 19962012 ranges from to , which can be related to pareto principle .the population distribution of cities in the united states follows a power - law with exponent , consistent with zipf s law .in addition , we consider spatial correlations of population / property because the spatial correlation can increase the variance of damage as exposed population / property are spatially concentrated due to the urbanization , such as manhattan in new york city and gangnam in seoul .areas with better accessibility may create more value , and the rents and infrastructural value of the areas could be higher .company headquarters are likely to be located in such areas .it is also likely that the neighborhood of the rich ( the poor ) is rich ( poor ) .based on these observations we assume that the pdfs of population / property are characterized by power - law distributions as , and that those population / property are spatially correlated . 
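for concreteness , values with the assumed pareto - type tail are easy to generate by inverse - transform sampling ; in the sketch below ( our own notation ) the exponent refers to the survival function P(X > x) ~ x^{-alpha} , i.e. a probability density decaying as x^{-(1+alpha)} .

```python
import numpy as np

def sample_power_law(alpha, x_min=1.0, size=1, rng=None):
    """draw samples with P(X > x) = (x_min / x)^alpha for x >= x_min."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)
    return x_min * u ** (-1.0 / alpha)          # inverse-cdf sampling

values = sample_power_law(alpha=1.5, size=10000, rng=np.random.default_rng(0))
```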
secondly , the nature of natural disaster is considered .it is known that the intensity of natural disaster like earthquake , storm , and forest fire follows a power - law distribution . in our work , we focus on disasters like tornadoes that move along a trajectory .since the intensity of disaster can be incorporated in modeling the vulnerability , we instead assume the length of a trajectory , denoted by , to be distributed as power - law , with exponent .for simplicity , we assume that each disaster is initiated at a random position , and moves in a random direction along a straight line of length . the assumption of straight line can be easily relaxed to consider curved trajectories or even more complicated geometry .the assumption that the initiation position is not correlated with spatial configurations of population / property seems to be strong . in reality , people can choose to live in a location with less disasters to avoid damage .in contrast , people can prefer a location with more disasters if natural phenomena related to a certain disaster can benefit people despite possible damage by such disaster .for example , coastal area is more likely to be affected by tsunamis , while it provides ports for trade and fishing .thirdly , we consider the vulnerability as a fraction of the realized damage out of each unit of population / property .it differs by variables such as wealth , building code , and network structure of infrastructure .since there are more hospitals and more labor who are devoted to control disasters in cities , cities could have less vulnerability . on the other hand, cities could be more vulnerable due to a cascading effect of damage . in our model , since the property of vulnerability is hard to measure , we assume that the vulnerability is constant through the trajectory of disaster . our model with the assumption of constant vulnerability can provide benchmark results for further realistic refinements .finally , the total damage by a natural disaster is modeled to be as the sum of population / property exposed to that disaster , multiplied by the vulnerability of those population / property . , where the probability density function of value follows a power - law as with .the height at each site represents a logarithm of the value . ]we first generate landscapes or configurations of population / property on a two - dimensional square lattice of size with a periodic boundary condition , see fig .[ fig : landscape ] .the population / property , or a value for convenience , at site is denoted by for .the pdf of value is assumed to follow a power law , with exponent . to parameterize the degree of spatial correlation of value, we define a normalized centrality as a function of value configuration : where measures the total difference between values of neighboring sites . and denote the values of for random and concentrated configurations , respectively .the zero centrality , , corresponds to the random configuration , while the maximum centrality , , implies that the values are concentrated in the central area , i.e. , around the origin .the configuration with intermediate is formulated using a simulated annealing algorithm .starting from a random configuration , two randomly selected sites swap their values only if the swapping increases the correlation .the swapping is repeated until the correlation reaches the desired value of .figure [ fig : landscape ] shows exemplary configurations of value for random ( ) , correlated ( ) , and concentrated ( ) cases . 
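the construction of correlated landscapes can be sketched as follows ( a naive variant of the swapping procedure described above , with our own names ; the full simulated - annealing schedule and the exact normalization of the centrality are omitted ) : power - law values are placed at random , and pairs of sites are swapped only when the swap lowers the total difference between neighbouring values , which drives the configuration toward stronger spatial correlation .

```python
import numpy as np

def neighbour_roughness(v):
    """sum of |value differences| over nearest-neighbour pairs on a periodic lattice."""
    return (np.abs(v - np.roll(v, 1, axis=0)).sum()
            + np.abs(v - np.roll(v, 1, axis=1)).sum())

def correlate_landscape(v, n_swaps=20000, rng=None):
    """greedy swapping sketch: keep a swap of two random sites only if it lowers
    the neighbour roughness (i.e. increases the spatial correlation of the values)."""
    rng = np.random.default_rng() if rng is None else rng
    v = v.copy()
    L = v.shape[0]
    for _ in range(n_swaps):
        (i1, j1), (i2, j2) = rng.integers(0, L, size=(2, 2))
        before = neighbour_roughness(v)
        v[i1, j1], v[i2, j2] = v[i2, j2], v[i1, j1]
        if neighbour_roughness(v) > before:      # revert swaps that decorrelate
            v[i1, j1], v[i2, j2] = v[i2, j2], v[i1, j1]
    return v
```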
for natural disasters ,we focus on moving disasters like tornadoes that move along a trajectory .we assume that a disaster initiated at a random site moves in a random direction , i.e. , one of and directions , over the trajectory with length .the length is randomly drawn from a distribution with exponent .the vulnerability at site is assumed to be constant for all sites in the system such that for all for convenience .then , the damage by the disaster initiated at and moving sites , say in the direction of -axis , is given as the sum of values over the trajectory : is expected that the damage has a large variance by showing a fat - tailed distribution , with exponent . in general , the value of depends on exponents , , and the centrality .the case of random configurations with zero centrality can be analytically solved due to its uncorrelated nature .the damage is independent of the initiation position and moving direction of the disaster , hence it can be written as a sum of independent and identical random variables , : for small , as is mostly , i.e. , , we obtain for . for sufficiently large , if the variance of is small , one can approximate as , where denotes an average , leading to for . finally , for sufficiently large , if the variance of is large , is dominated by that is proportional to . by means of the identity , one gets for .we obtain apart from the coefficients thus for large , which is depicted in fig .[ fig : phasediagram](a ) .this solution has been also obtained by rigorous calculations . in case with and , i.e. , when the tail of value distribution is sufficiently thin , one obtains , implying that statistical properties of damage are determined only by those of disaster . in case with and , one gets , implying the dominance of statistical properties of value in deciding damage .only when both value and disaster distributions have sufficiently fat tails , i.e. , when , the fat tail of damage can be explained in terms of the interplay of both value and disaster .we perform numerical simulations on the square lattice of linear size to confirm our analysis as shown in fig .[ fig : numeric ] .since a concentrated configuration with has a rotational symmetry around the origin , it can be described simply by a function of the distance from the origin , i.e. , with .the relation has been obtained by the identity . for convenience ,we calculate in a continuum limit of lattice as where the polar coordinate and the angle are the initiation position and moving direction of the disaster , and denotes the transverse dimension or width of the disaster . since can be written in terms of and , we get ^{-\mu/2}dt.\end{aligned}\ ] ] for small , the integration is approximated up to the first order of , leading to .thus , we obtain for apart from the coefficients . for large , by substituting the variable of integration as ,one gets note that .the following formula can be used : where gives the sign of and is the hypergeometric function .we consider two cases according to the moving direction of the disaster . in case with ,i.e. , , the disaster moves away from the central area .we get the result up to the leading terms as with a constant . if ( ) , from , we have the term for .this is dominated by because for . if ( ) , leads to the term for , which is dominated by for . on the other hand , for , i.e. 
, , the disaster approaches the central area to some extent and eventually moves away .the domain of integration in eq .( [ eq : d_c1 ] ) can be divided into two at the closest position of the disaster to the origin given by : here the second inequality holds for sufficiently large .similarly to the case with , we get the same result up to the leading terms as eq .( [ eq : d_c1_case1 ] ) but with replaced by . finally ,since , we obtain the result for as this solution is depicted in fig .[ fig : phasediagram](b ) , and confirmed by numerical simulations as shown in fig .[ fig : numeric ] . before investigating the effect of correlated configurations with , we compare the results for random and concentrated cases , eqs .( [ eq : gamma_c0 ] , [ eq : gamma_c1 ] ) .if or , we get from for both cases of and .the first term is mainly due to when , hence it is independent of the spatial correlation or centrality .the second term is due to .that is , most disasters move along trajectories consisting of small when the tail of is sufficiently thin , i.e. , when .this leads to the irrelevance of the spatial correlation .thus , one can expect that holds for the entire range of .this is confirmed by numerical simulations for the case of in fig .[ fig : numeric](f ) , with some deviations mainly due to logarithmic corrections to scaling , like , and finite size effects .it is observed that the estimated values of are systematically smaller for larger centrality , implying fatter tails of damage distributions . for and ,the difference in values of for and for is summarized as follows : this implies that the tail of damage distribution for the concentrated case is always thinner than that for the random case .the maximum value of the difference is when .the numerical simulations for the case of in fig .[ fig : numeric](c ) confirm the analytic solution , with deviations due to corrections to scaling and finite size effects .while such deviations seem to be large , we systematically observe that in the region of , the values of for are slightly larger than those for , comparable to the analytic results .it turns out that whether the spatial correlation of value enhances or reduces the fat tail property of damage is not a simple issue as expected .the randomness in value configurations may enhance the variance of damage by introducing more fluctuations in exposed values when the tail of value distribution is sufficiently fat ( ) .on the other hand , the randomness may reduce the variance of damage by mixing the values when the tail of value distribution is sufficiently thin ( ) .the former explains the analytic expectation that the damage will have fatter tails for more correlated configurations , while the latter does the numerical observations of the opposite tendency . in order to support our results , we empirically study casualty and property damage distributions by tornadoes in the united states from 1970 to 2011 , for which the data were retrieved on 24 june 2011 from the website of national climatic data center . by assuming a power - law form for those distributions , the power - law exponents are estimated as for the numbers of death and the injured , and for property and crop damages , as shown in fig .[ fig : tornado ] .the significantly different values of power - law exponent , i.e. , and , could imply that they represent qualitatively different underlying mechanisms or origins . 
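the tail exponents quoted here can be estimated , for instance , with the standard hill / maximum - likelihood estimator of the survival exponent in P(X > x) ~ x^{-alpha} ; a sketch follows ( the choice of the tail cutoff x_min is left to the user , and the tornado data themselves are not reproduced here ) .

```python
import numpy as np

def tail_exponent_mle(data, x_min):
    """hill / continuous-mle estimate of alpha in P(X > x) ~ x^{-alpha},
    fitted to the sub-sample x >= x_min."""
    tail = np.asarray(data, dtype=float)
    tail = tail[tail >= x_min]
    return tail.size / np.log(tail / x_min).sum()

# example with synthetic pareto data of known exponent
rng = np.random.default_rng(0)
sample = rng.uniform(size=100000) ** (-1.0 / 1.8)      # alpha = 1.8, x_min = 1
print(tail_exponent_mle(sample, x_min=1.0))            # close to 1.8
```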
in order to account for these observations for tornadoes ,our simple model can be extended to take into account various factors like position - dependent vulnerability .for example , let us consider that the vulnerability at a site scales with the value at that site as , where the scaling exponent can be positive or negative depending on the situation . since the effective value , denoted by , is proportional to , the fat tail of the pdf of effective value is characterized by the power - law exponent .note that reduces to for as in our simplest setup .more detailed analysis for the effect of position - dependent vulnerability is left for future works .we have developed a simple model to show that damages by natural disasters could have large variances in terms of fat - tailed distributions of natural disaster and population / property , as well as in terms of their spatial correlations .the damage has been modeled as the sum of population / property exposed to the moving disaster , while the vulnerability was assumed to be constant through the trajectory of disaster .our simple model draws limits to the implication of the results . in reality, vulnerability differs by variables such as wealth , building code , and network structure of infrastructure .the trajectory of disaster may be not straight .however , our model can still provide the benchmark results for more realistic refinements , for which these assumptions can be easily relaxed .our research enables to quantitatively study the effect of fat tail property , in terms of the exact analytic results for the power - law exponent of damage distributions .thus , our model can serve as more concrete framework for future studies on damage by natural disaster as well as risk analysis under uncertainty with fat - tailed distributions .we also note that since a portion of damages due to climate change is associated with natural disaster , our research can provide grounds to the discussion on the fat tail property of damage due to climate change .
in order to account for large variance and fat tail of damage by natural disaster , we study a simple model by combining distributions of disaster and population / property with their spatial correlation . we assume fat - tailed or power - law distributions for disaster and population / property exposed to the disaster , and a constant vulnerability for exposed population / property . our model suggests that the fat tail property of damage can be determined either by that of disaster or by those of population / property depending on which tail is fatter . it is also found that the spatial correlations of population / property can enhance or reduce the variance of damage depending on how fat the tails of population / property are . in case of tornadoes in the united states , we show that the damage does have fat tail property . our results support that the standard cost - benefit analysis would not be reliable for social investment in vulnerability reduction and disaster prevention .
learning algorithms for deep multilayer neural networks have been known for a long time , though they usually could not outperform simpler , shallow networks . in this way, deep multilayer networks were not widely used to solve large scale real - world problems until the last decade . in 2006 , deep belief networks ( dbns ) came out as a real breakthrough in this field , since the learning algorithms proposed ended up being a feasible and practical method to train deep networks , with spectacular results .dbns have restricted boltzmann machines ( rbms ) as their building blocks .rbms are topologically constrained boltzmann machines ( bms ) with two layers , one of hidden and another of visible neurons , and no intralayer connections .this property makes working with rbms simpler than with regular bms , and in particular the stochastic computation of the log - likelihood gradient may be performed more efficiently by means of gibbs sampling . in 2002 ,the _ contrastive divergence _ ( cd ) learning algorithm was proposed as an efficient training method for product - of - expert models , from which rbms are a special case .it was observed that using cd to train rbms worked quite well in practice .this fact was important for deep learning since some authors suggested that a multilayer deep neural network is better trained when each layer is pre - trained separately as if it were a single rbm .thus , training rbms with cd and stacking up them seems to be a good way to go when designing deep learning architectures . however , the picture is not as nice as it looks , since cd is not a flawless training algorithm . despite cd being an approximation of the true log - likelihood gradient ,it is biased and it may not converge in some cases .moreover , it has been observed that cd , and variants such as persistent cd or fast persistent cd can lead to a steady decrease of the log - likelihood during learning .therefore , the risk of learning divergence imposes the requirement of a stopping criterion .there are two main methods used to decide when to stop the learning process .one is based on the monitorization of the _ reconstruction error _the other is based on the estimation of the log - likelihood with _annealed importance sampling _( ais ) .the reconstruction error is easy to compute and it has been often used in practice , though its adequacy remains unclear because of monotonicity .ais seems to work better than the reconstruction error in most cases , though it is considerably more expensive to compute , and may also fail . in this work we approach this problem from a completely different perspective .based on the fact that the energy is a continuous and smooth function of its variables , the close neighborhood of the high - probability states is expected to acquire also a significant amount of probability . in this sense , we argue that the information contained in the neighborhood of the training data is valuable , and that it can be incorporated in the learning process of rbms .in particular , we propose to use it in the monitorization of the log - likelihood of the model by means of a new quantity that depends on the information contained in the training set and its neighbors . furthermore , and in order to make it computationally tractable ,we build it in such a way that it becomes independent of the partition function of the model . 
in this way, we propose a neighborhood - based stopping criterion for cd and show its performance in several data sets .energy - based probabilistic models define a probability distribution from an energy function , as follows : where and stand for ( typically binary ) visible and hidden variables , respectively .the normalization factor is called partition function and reads since only is observed , one is interested in the marginal distribution but the evaluation of the partition function is computationally prohibitive since it involves an exponentially large number of terms . in this way, one can not measure directly .the energy function depends on several parameters , that are adjusted at the learning stage .this is done by maximizing the likelihood of the data . in energy - based models ,the derivative of the log - likelihood can be expressed as } \nonumber \\ { } & \ \ \ \ \ \ \ \ -\ e_{p(\xx ) } \left[e_{p(\h|\xx)}\left[\frac{\partial\text{energy}(\xx,\h)}{\partial\theta}\right ] \right ] \ , \end{aligned}\ ] ] where the first term is called the positive phase and the second term the negative phase . similar to ( [ pdf - energy - x - sumh ] ) , the exact computation of the derivative of the log - likelihood is usually unfeasible because of the negative phase in ( [ dlog - likelihood ] ) , which comes from the derivative of the partition function .restricted boltzmann machines are energy - based probabilistic models whose energy function is : rbms are at the core of dbns and other deep architectures that use rbms for unsupervised pre - training previous to the supervised step .the consequence of the particular form of the energy function is that in rbms both and factorize . in this way it is possible to compute and in one step , making it possible to perform gibbs sampling efficiently , in contrast to more general models like boltzmann machines .the most common learning algorithm for rbms uses an algorithm to estimate the derivative of the log - likelihood of a product of experts model .this algorithm is called contrastive divergence .contrastive divergence cd estimates the derivative of the log - likelihood for a given point as } \nonumber \\ { } & \ \ \ \ \ \ \ \ -\ e_{p(\h|\x_{n})}\left[\frac{\partial\text{energy}(\x_{n},\h)}{\partial\theta}\right ] \.\end{aligned}\ ] ] where is the last sample from the gibbs chain starting from obtained after steps : * * * ... * * . usually , $ ] can be easily computed .several alternatives to cd are persistent cd , fast persistent cd or parallel tempering .learning in rbms is a delicate procedure involving a lot of data processing that one seeks to perform at a reasonable speed in order to be able to handle large spaces with a huge amount of states . in doing so, drastic approximations that can only be understood in a statistically averaged sense are performed .one of the most relevant points to consider at the learning stage is to find a good way to determine whether a good solution has been found or not , and so to decide when the learning process should stop .one of the most widely used criteria for stopping is based on the monitorization of the reconstruction error , which is a measure of the capability of the network to produce an output that is consistent with the data at input . 
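before turning to the reconstruction error in detail , the cd - 1 update used throughout this work can be written in a few lines . the following is a hedged sketch for a binary rbm ( our own variable names ; weight decay and momentum are omitted ) .

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(W, b, c, batch, lr=0.1, rng=None):
    """one cd-1 update for a binary rbm with energy
    E(x, h) = -x.W.h - b.x - c.h; batch is an (n, n_visible) binary array."""
    rng = np.random.default_rng() if rng is None else rng
    ph0 = sigmoid(batch @ W + c)                          # p(h = 1 | x), positive phase
    h0 = (rng.uniform(size=ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                           # one gibbs step back to the visible layer
    v1 = (rng.uniform(size=pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)                             # negative phase from the reconstruction
    n = batch.shape[0]
    W += lr * (batch.T @ ph0 - v1.T @ ph1) / n
    b += lr * (batch - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```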
since rbms are probabilistic models , the reconstruction error of a data point is computed as the probability of given the expected value of for : \right ) \ , \ ] ] which is a probabilistic extension of the sum - of - squares reconstruction error for deterministic networks some authors have shown that , in some cases , learning induces an undesirable decrease in likelihood that goes undetected by the reconstruction error .it has been shown that the reconstruction error defined in ( [ reconstruction - error - rbm - probability ] ) usually decreases monotonically .since no increase in the reconstruction error takes place during training there is no apparent way to detect the change of behavior of the log - likelihood for cd .the proposed stopping criterion is based on the monitorization of the ratio of two quantities : the geometric average of the probabilities of the training set , and the sum of probabilities of points in a given neighbourhood of the training set .more formally , what we monitor is ^{1/n } } { { 1\over |d|}\sum_{j\in d}p(\y^{(j ) } ) } \ , \ ] ] where is a subset of points at a hamming distance from the training set less or equal than .the idea behind the definition is that the evolution of at the learning stage is expected to closely resemble that of the log - likelihood for certain values of and .for that reason we propose as the stopping criterion to find the maximum of , which will be close to the one shown by the log - likelihood of the data , as shown by the experiments in the next sections .the reason for that is twofold . on one handthe numerator and denominator monitor different things .the numerator , which is essentially the likelihood of the data , is sensitive to the accumulation of most of the probability mass by a reduced subset of the training data , a typical feature of cd . for continuity reasons ,the denominator is strongly correlated with the sum of probabilities of the training data . once the problem has been learnt , the probabilities in a close neighborhood of the training set will be high . the value of results from a delicate equilibrium between these two quantities ( see section [ experiments ] ) , which we propose to use as a stopping criterion for learning .on the other hand , due to the structure of , the partition functions involved in both the numerator and denominator cancels out , which is a necessary condition in the design of the quantity being monitorized . in other words ,the computation of can be equivalently defined as ^{1/n } } { { 1\over|d|}\sum_{j\in d}\sum_{\h}\e^{-\text{energy}(\y^{(j)},\h ) } } \ .\ ] ] the particular topology of rbms allows to compute efficiently .this fact dramatically decreases the computational cost involved in the calculation , which would otherwise become unfeasible in most real - world problems where rbms could been successfully applied . while the numerator in is directly evaluated from the data in the training set , the problem of finding suitable values for still remains .indeed , the set of points at a given hamming distance from the training set is independent of the weights and bias of the network . 
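in practice the criterion requires only free energies , since the partition function cancels in the ratio . a hedged sketch of this computation for a binary rbm is given below ( our notation ; the neighbourhood set , discussed next , is simply passed in as an array of binary states ) ; during training one monitors this quantity and stops around its maximum .

```python
import numpy as np

def free_energy(X, W, b, c):
    """F(x) = -b.x - sum_j log(1 + exp(c_j + (x W)_j)), so that
    exp(-F(x)) = sum_h exp(-E(x, h)) is the unnormalised probability of x."""
    return -X @ b - np.logaddexp(0.0, X @ W + c).sum(axis=1)

def log_criterion(train, neigh, W, b, c):
    """logarithm of the proposed ratio: geometric mean of the unnormalised
    probabilities of the training set over the average unnormalised probability
    of the neighbourhood set; log Z cancels between numerator and denominator."""
    log_num = -free_energy(train, W, b, c).mean()
    fe = -free_energy(neigh, W, b, c)
    m = fe.max()
    log_den = m + np.log(np.exp(fe - m).mean())      # numerically stable log of the mean
    return log_num - log_den
```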
in this way, it can be built once at the very beginning of the process and be used as required during learning .therefore , two issues have to be sorted out before the criterion can be applied .the first one is to decide a suitable value of .experiments with different problems show that this is indeed problem dependent , as is illustrated in the experimental section below .the second one is the choice of the subset , which strongly depends on the size of the space being explored . for small spaces one can safely use the complete set of points at a distance less than or equal to , but that can be forbiddingly large in real world problems .for this reason we explore two possibilities : one including all points and another including only a random subset of the same size as the training set , which is only as expensive as dealing with the training set .we performed several experiments to explore the aforementioned criterion defined in section [ proposed - stopping - criterion ] and study the behavior of in comparison with the log - likelihood and the reconstruction error of the data in several problems .we have explored problems of a size such that the log - likelihood can be exactly evaluated and compared with the proposed parameter .the first problem , denoted _ bars and stripes _( bs ) , tries to identify vertical and horizontal lines in 4 pixel images .the training set consists in the whole set of images containing all possible horizontal or vertical lines ( but not both ) , ranging from no lines ( blank image ) to completely filled images ( black image ) , thus producing different images ( avoiding the repetition of fully back and fully white images ) out of the space of possible images with black or white pixels . the second problem , named _ labeled shifter ensemble _ ( lse ) , consists in learning 19-bit states formed as follows : given an initial 8-bit pattern , generate three new states concatenating to it the bit sequences 001 , 010 or 100 .the final 8-bit pattern of the state is the original one shifting one bit to the left if the intermediate code is 001 , copying it unchanged if the code is 010 , or shifting it one bit to the right if the code is 100 .one thus generates the training set using all possible states that can be created in this form , while the system space consists of all possible different states one can build with 19 bits .these two problems have already been explored in and are adequate in the current context since , while still large , the dimensionality of space allows for a direct monitorization of the partition function and the log - likelihood during learning . for the sake of completeness, we have also tested the proposed criterion on randomly generated problems with different space dimensions , where the number of states to be learnt is significantly smaller than the size of the space .in particular , we have generated four different data sets ( ran10 , ran12 , ran14 and ran16 ) consisting of binary input units and examples to be learnt , as suggested in . in the followingwe discuss the learning processes of these problems with binary rbms , employing the contrastive divergence algorithm cd with and as described in section [ cd ] . in the bs casethe rbm had 16 visible and 8 hidden units , while in the lse problem these numbers were 19 and 10 , respectively .for the random data sets we have used 10 hidden units in each case .every simulation was carried out for a total of 50000 epochs , with measures being taken every 50 epochs . 
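for reference , the bars - and - stripes training set used here can be generated in a few lines ( a sketch with our own names ; it yields the 30 distinct 4x4 patterns once the doubly counted all - black and all - white images are removed ) .

```python
import numpy as np
from itertools import product

def bars_and_stripes(n=4):
    """all distinct n x n bars-and-stripes images, flattened to binary vectors."""
    images = set()
    for bits in product([0, 1], repeat=n):
        bars = np.tile(np.array(bits), (n, 1))        # every column is uniform: vertical bars
        images.add(tuple(bars.flatten()))
        images.add(tuple(bars.T.flatten()))           # transpose gives horizontal stripes
    return np.array(sorted(images))

data = bars_and_stripes(4)
print(data.shape)                                     # (30, 16)
```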
moreover ,every point in the subsequent plots was the average of ten different simulations starting from different random values of the weights and bias .other parameters affecting the results that were changed along the analysis are the learning rates involved in the weight and bias update rules .no weight decay was used , and momentum was set to 0.8 .the learning rates were chosen in order to make sure that the log - likelihood degenerates , in such a way that it presents a clear maximum that should be detected by . in the followingwe perform two series of experiments that are reported in the next two subsections . in the first one ( section [ complete ] )we analyze the case where all states in are included . in the second one ( section [ uncomplete ] ) we relax the computational cost of the evaluation of by selecting only a small subset of all the states in . [ cols="^,^,^,^,^ " , ] despite the success of the criterion built as proposed , it is clear that for large spaces it can be unpractical if the number of states in the neighborhood of the training set is very large .for that reason , we have tested the criterion on randomly selected subsets of the same size as the training set , which is always computationally tractable . in this sense , we denote by the evaluation of on .figure [ fig_bs_lse_sales ] shows compared with from the previous figures for the bs ( first row ) and lse ( second row ) problems .more precisely , the first column shows the log - likelihood of the data along the training process , while the rest of the columns plot both and for and .notice that the absolute scales of and may vary mainly due to the value of the sum of probabilities in the denominators .however , since the precise value of these quantities is irrelevant , we have decided to scale them properly for the sake of comparison .although is built from a much smaller set than , it captures all the significant features of and can therefore be used instead of it . in this sense, provides a good stopping criterion for cd , although it is not as robust as due to the strong reduction of states contributing to as compared with those entering in .this reduction is illustrated in table [ number - of - neighbours ] , where we show the number of neighboring states to the data set at different distances for the bs and lse problems . by increasing the number of states included in , convergence to expected at the expense of an increase in computational cost .however , the present results indicate that , at least for the problems at hand , a number of examples similar to that of the training set in the evaluation of is enough to detect the maximum of the log - likelihood of the data .all the results presented up to this point show the goodness of the proposed stopping criterion for learning in cd .however , the underlying idea can be applied to different learning algorithms that try to maximize the log - likelihood of the data . in this way we have repeated all the previous experiments for cd with very similar results to the ones above . as an example, figure [ fig_lse_cd10 ] shows the log - likelihood , and with and cd for the lse data set , which is the largest one analyzed in this work . as it is clearly seen , the quality of the results is very similar to the cd case , thus stressing the robustness of the criterion . 
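the neighbourhood sets themselves depend only on the training data , so they can be enumerated ( or subsampled ) once before training ; a sketch follows ( our own names ; whether the training points themselves are kept in the set is a choice we leave open , and here they are excluded ) .

```python
import numpy as np
from itertools import combinations

def hamming_ball(x, d):
    """all binary vectors at hamming distance 1..d from the binary vector x."""
    out = []
    for r in range(1, d + 1):
        for flips in combinations(range(len(x)), r):
            y = np.array(x, dtype=int)
            y[list(flips)] ^= 1
            out.append(tuple(y))
    return out

def neighbourhood(train, d, max_size=None, rng=None):
    """states within hamming distance <= d of the training set (training points excluded),
    optionally subsampled to max_size states, e.g. to the size of the training set."""
    rng = np.random.default_rng() if rng is None else rng
    train_set = {tuple(np.array(x, dtype=int)) for x in train}
    states = {y for x in train for y in hamming_ball(x, d)} - train_set
    states = np.array(sorted(states))
    if max_size is not None and len(states) > max_size:
        states = states[rng.choice(len(states), size=max_size, replace=False)]
    return states
```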
as a final remark, we note that for the bs problem the trained rbm stopped using the proposed criterion is able to qualitatively generate samples similar to those in the training set .we show in figure [ barritas ] the complete training set ( two upper rows ) and the same number of generated samples ( two lower rows ) obtained from the rbm trained with cd and stopped after 5000 epochs , around the maximum shown by , which approximately coincides with the optimal value of the log - likelihood .it is important to realize that , ultimately , the quality of the model is a direct measure of the quality of cd learning , and that the model used to generate the plots is the one with largest , which is quite close to the one with largest likelihood .in this work we have introduced the contribution of neighboring points to the training set to build a stopping criterion for learning in cd .we have shown that not only the training set but also the neighboring states contain valuable information that can be used to follow the evolution of the network along training .based on the fact that learning tries to increase the contribution of the relevant states while decreasing the contribution of the rest , continuity and smoothness of the energy function assigns more probability to states close to the training data .this is the key idea behind the proposed stopping criterion .in fact , two different but related estimators ( depending on the number of states used to compute them ) have been proposed and tested experimentally .the first one includes all states close to the training set , while the second one takes only a fraction of these states as small as the size of the training set .the first estimator is robust but may require from the use of a forbiddingly large amount of states , while the second one is always tractable and captures most of the features of the first one , thus providing a suitable stopping learning criterion .this second estimator could be used in larger data set problems , where an exact computation of the log - likelihood is not possible .additionally , the main idea of proximity to the training set will be explored in other aspects related to learning in future work .er : this research is partially funded by spanish research project tin2012 - 31377 . a. fischer and c. igel , `` empirical analysis of the divergence of gibbs sampling based learning algorithms for restricted boltzmann machines , '' in _ international conference on artificial neural networks ( icann ) _ , vol . 3 , 2010 , pp .208217 .d. e. rumelhart , g. e. hinton , and r. j. williams , `` learning internal representations by error propagation , '' in _parallel distributed processing : explorations in the microstructure of cognition , volume 1 : foundations _, d. e. rumelhart , j. l. mcclelland , and the pdp research group . ,eds.1em plus 0.5em minus 0.4emmit press , 1986 .h. lee , r. grosse , r. ranganath , and a. y. ng , `` convolutional deep belief networks for scalable unsupervised learning of hierarchical representations , '' in _ international conference on machine learning _ , 2009 , pp .609616 .q. v. le , m. a. ranzato , r. monga , m. devin , k. chen , g. s. corrado , and a. y. ng , `` building high - level features using large scale unsupervised learning , '' in _ 29th international conference on machine learning _ , 2012 .p. 
smolensky , `` information processing in dynamical systems : foundations of harmony theory , '' in _parallel distributed processing : explorations in the microstructure of cognition ( vol .1 ) _ , d. e. rumelhart and j. l. mcclelland , eds.1em plus 0.5em minus 0.4emmit press , 1986 , pp .194281 .s. geman and d. geman , `` stochastic relaxation , gibbs distributions , and the bayesian restoration of images , '' _ ieee transactions on pattern analysis and machine intelligence _ , vol . 6 , no . 6 , pp .721741 , 1984 .y. bengio , p. lamblin , d. popovici , and h. larochelle , `` greedy layer - wise training of deep networks , '' in _ advances in neural information processing ( nips06 ) _ , vol .19.1em plus 0.5em minus 0.4emmit press , 2007 , pp .153160 .g. desjardins , a. courville , y. bengio , p. vincent , and o. delalleau , `` parallel tempering for training of restricted boltzmann machines , '' in _ 13th international conference on artificial intelligence and statistics ( aistats ) _ , 2010 , pp . 145152 .
restricted boltzmann machines ( rbms ) are general unsupervised learning devices used to learn generative models of data distributions . rbms are often trained with the contrastive divergence learning algorithm ( cd ) , an approximation to the gradient of the data log - likelihood . a simple reconstruction error is commonly used as a stopping criterion for cd , although several authors have raised doubts about the reliability of this procedure . in many cases the evolution of the reconstruction error is monotonic while that of the log - likelihood is not , indicating that the former is not a good estimator of the optimal stopping point for learning . however , few alternatives to the reconstruction error have been discussed in the literature . in this manuscript we investigate simple alternatives to the reconstruction error , based on information contained in states neighboring the training set , as a stopping criterion for cd learning .
it is commonly believed that the observed solar and stellar variabilities have their origin in the hydromagnetic dynamos associated with turbulent convection zones .numerical studies have been made using the full magneto - hydrodynamical partial differential equations ( pde ) , which reproduce some features of solar and stellar dynamos ( e.g. gilman 1983 ) .such models are fairly complex and do not allow extensive parameter surveys . as a result , a number of alternatives to the direct integration of pde have been pursued . among thesehas been the employment of the mean field dynamo formalism ( krause & rdler 1980 ) in order to construct various types of dynamos , such as dynamo models . despite the fact that such models have been shown to be capable of producing a large number of observationally relevant modes of behaviour , ranging from stationary to chaotic ( c.f .brandenburg et al .1989a , b ; tavakol et al .1995 ) , they nevertheless involve a number of unknown features such as the exact nature of the nonlinearities involved .furthermore , in order to clarify the origin of dynamical modes of behaviour observed in dynamo models , further simplifications of these models have been considered , involving low dimensional truncations of the governing pde .such models have also been shown to be capable of producing a number of important features of stellar variability including periodic , intermittent and chaotic modes of behaviour ( zeldovich et al .1983 ; weiss et al .1984 ; feudel et al . 1993 ) .now given that these models are cheaper to integrate and more transparent to study , it would be very useful if we could employ them as diagnostic tools in order to study the effects of introducing different parametrisations and nonlinearities involved .the problem , however , is that these low dimensional models involve severe approximations , and therefore in order to be able to take the results produced by them as physically relevant , it is important that they remain robust under changes which fall within the domain of the approximations assumed .this is particularly of importance since on the basis of results from dynamical systems theory , structurally stable systems are not everywhere dense in the space of dynamical systems ( smale 1966 ) , in the sense that small changes in models can produce qualitatively important changes in their dynamics . in this waythe appropriate theoretical framework for the construction of mathematical models and the analysis of observational data may turn out to be that of structural fragility ( tavakol & ellis 1988 ; coley & tavakol 1992 ; tavakol et al .1995 ) . here as examples of such changeswe shall consider first changes in the order of truncation and then changes in the details of the physics assumed . 
regarding the former , a number of attempts have already been made to study the effects of increasing the truncation order on the resulting dynamics .for example , schmalz & stix ( 1991 ) ( hereafter referred to as s&s91 ) have looked at the detailed dynamics of the low dimensional truncations of the mean field dynamo equations and have studied what happens as the order of the truncation is increased , while tobias et al .( 1995 ) have employed normal form theory to construct a robust minimal third order model which exhibits both the modulation of basic cycles and chaos .these studies have shown that low dimensional models can capture a number of important dynamical features of the dynamo models .our aim in this paper is complementary to that of the above authors .we take a detailed look at the results in s&s91 and ask to what extent these results remain robust as reasonable changes are made to the details of the physics employed , and in each case we study how such changes affect the dynamical behaviour of different truncations .the starting point of the truncated dynamical models considered in s&s91 is the mean field induction equation where and are the mean magnetic field and the mean velocity , respectively .the turbulent magnetic diffusitivity and the coefficient , which relates the mean electrical current arising in helical turbulence ( the ) to the mean magnetic field , both arise from the correlation of small scale ( turbulent ) velocity and magnetic fields ( krause & rdler 1980 ) .s&s91 employ an axisymmetrical configuration with one spatial dimension , which corresponds to a latitude coordinate and a longitudinal velocity with a constant radial gradient ( the vertical shear ) .the magnetic field takes the form where is the ( latitudinal ) of the magnetic vector potential , the of and is measured in terms of the stellar radius .these assumptions allow eq .( [ induction ] ) to be split into in s&s91 , is divided into a static ( kinematic ) and a dynamic ( magnetic ) part : , with its time - dependent part satisfying an evolution equation in the form where is a damping operator and is a pseudo - scalar that is quadratic in the magnetic filed . it has been argued that the effect is quenched by the current helicity density , which in turn is governed by a dynamical equation ( kleeorin & ruzmaikin 1982 ; zeldovich et al .the reason the feedback ( quenching ) is not instantaneous is a consequence of the fact that the magnetic helicity is conserved in the absence of diffusion or boundary effects .such models have been investigated recently by kleeorin et al .( 1995 ) . in s&s91 a truncated version of yet another model was studied , in which instead of the current helicity density , the magnetic helicity density , or rather , was used .their model was motivated on heuristic grounds .bifurcation properties of a truncated version of a similar model , but with a different damping term , have been studied by feudel et al . (our present investigation is thus motivated partially by the variety of models presented in the literature .it is important to know what is the effect of the dynamical feedback and how the different representations affect the results . to proceed s&s91 specify the feedback in the following way and then look at various truncations of these equations and study what happens to the dynamical behaviour of the resulting systems as is increased . 
to do this it is convenient to transform these equations into a non - dimensional formthis can be done by employing a reference field , measuring time in units of and defining the following non - dimensional quantities where is the turbulent diffusivity. equations ( [ p1 ] ) , ( [ p2 ] ) and ( [ dynamicalpha ] ) with the damping operator taken to be can then be rewritten in the following non - dimensional forms : now considering the interval ( which corresponds to the full range of latitudes ) , taking the boundary conditions at and to be given by and using a spectral expansion of the form allows the set of eqs .( [ 1][3 ] ) to be transformed into the form where if is odd and otherwise and if is odd and otherwise .these rules enable the system to describe fields which are strictly symmetric ( i.e. having only components with odd and and with even ) or strictly antisymmetric ( i.e. having only components with odd and and with even ) with respect to , provided the initial conditions have either of these parities . using these equations ,s&s91 studied a number of such truncations numerically by varying the dynamo number at each truncation .their main conclusions were : 1 . with the choice of the driving term given by eq .( [ ab ] ) the antisymmetric truncation with the smallest non - trivial indices is identical with the lorenz system ( lorenz 1963 ) .2 . different truncations are capable of producing stationary , oscillatory and chaotic modes of behaviour .they also make observations about the changes in the route to chaos , and conclude that , as is increased , the route changes from period doubling to the ruelle takens newhouse scenario ( ruelle & takens 1971 ; newhouse et al .the qualitative behaviour of the truncations stabilises as the number of modes is increased and in particular for . as an examplethey observe that as is increased the limit cycles remain stable for larger dynamo numbers .they also discuss very briefly the case , observing that the case is always a stable fixed point and that for the antisymmetric limit cycle becomes unstable via a saddle node bifurcation .now , as mentioned above , there are arguments in support of both the form of the driving term as well as the damping term being different ( kleeorin et al .so as a first step , we shall study , in the next section , how robust the results in s&s91 are with respect to various physically justified changes in the driving term that have been considered in the literature in eq .( [ dynamicalpha ] ) . in sect .[ t ] we study the effects of changes in the damping term .the general physically motivated choice for the driving term is given by kleeorin & ruzmaikin ( 1982 ) , zeldovich et al .( 1983 ) and kleeorin et al .( 1995 ) to be in the form where and are constants . to study the effects of each term separately, we shall proceed by considering the cases ( ) and ( ) in the following sections .taking to be of the form , substituting for from eq .( [ bspherical ] ) and recalling that we obtain which allows eq .( [ 3 ] ) to be written as proceeding in a similar way as in previous section we obtain an identical set of differential equations to those obtained in s&s91 , except that eq .( [ main3_3 ] ) is now changed to where if is odd and otherwise .the function is clearly different from unless , in which case is also equal to zero . 
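since, as noted above, the antisymmetric truncation with the smallest non-trivial indices is identical with the lorenz system, the diagnostic used throughout this section (the largest lyapunov exponent) can be illustrated on that familiar case. the sketch below uses the classic lorenz parameters rather than the dynamo parameters, together with a simple two-trajectory benettin-type estimate; it is meant only to show the kind of computation behind the lyapunov spectra reported here.

```python
import numpy as np

def lorenz(state, sigma=10.0, r=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (r - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def largest_lyapunov(f, x0, dt=0.01, steps=50000, d0=1e-8):
    """benettin-type estimate: follow a reference and a slightly displaced
    trajectory, renormalising their separation at every step and averaging
    the logarithmic expansion rate."""
    x = np.array(x0, dtype=float)
    y = x + np.array([d0, 0.0, 0.0])
    acc = 0.0
    for _ in range(steps):
        x = rk4_step(f, x, dt)
        y = rk4_step(f, y, dt)
        d = np.linalg.norm(y - x)
        acc += np.log(d / d0)
        y = x + (y - x) * (d0 / d)   # renormalise the separation back to d0
    return acc / (steps * dt)

print(largest_lyapunov(lorenz, [1.0, 1.0, 1.0]))   # roughly 0.9 for these classic parameters
```

a negative, zero or positive value of this estimate distinguishes equilibrium, periodic and chaotic regimes respectively, which is how the lyapunov spectra are read in the figures discussed below.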
for this system we can study also the pure antisymmetric and symmetric solutions , but for the sake of comparison with the results in s&s91 we confined ourselves to the antisymmetric solutions .now for the case of , the eqs .( [ main3_1 ] ) , ( [ main3_2 ] ) and ( [ cn ] ) become which upon using the transformations result , as in s&s91 , in the usual lorenz equations ( lorenz 1963 ) , with the control parameters given by , , and . to be compatible with s&s91 we also used throughout in order to obtain chaotic behaviour ,for which one requires ( sparrow 1982 ) .this amounts to the expectation that relaxes much more slowly than the magnetic field . ] .since our aim is to study the qualitative effects brought about by the changes in the form of , we will not delve deeply into the details of the dynamics , such as the routes to chaos , and concentrate instead on the occurrence of equilibrium , periodic ( including quasiperiodic ) and chaotic regimes .accordingly , the tools we employ are the time series and the spectra of lyapunov exponents .the latter is particularly useful as a relatively sensitive tool to characterise the dynamics , with the lyapunov spectra of the types , , and corresponding to equilibrium , periodic , quasiperiodic ( with two periods ) and chaotic regimes respectively . also to keep the numerical costs reasonable ,the resolution of in all the figures was , unless stated otherwise , taken to be .now given the fact that in many astrophysical settings ( including that of the sun ) the sign of the dynamo number is not known , we shall also study the effects of changes in the sign of .we note also that the dynamo concept becomes invalid if exceeds a certain limit ( choudhuri 1990 ) .furthermore , in general , as is increased more modes ( higher ) are required to achieve convergence ( numerically bounded solutions ) . for the sake of comparison with s&s91, we studied the dynamics of the system ( [ main3_1 ] , [ main3_2 ] , [ cn ] ) , for different values of the truncation order .a summary of our numerical results is given in fig .[ jb_plus ] which is a plot of the two largest lyapunov exponents as a function of the dynamo number for different truncations . in the following figures ,the largest lyapunov exponent is depicted by a solid line and its negative , zero and positive values indicate equilibrium , periodic and chaotic regimes .the simultaneous vanishing of the second lyapunov exponent would imply the presence of quasiperiodic motion with two frequencies ( i.e. motion on a 2torus ) .it was not necessary to plot the third exponent , since no motion on or higher dimensional tori was observed which is not surprising in view of the results of newhouse et al .( 1978 ) . #1#20.33#1 for a more transparent comparison , we have also produced in fig .[ ab_plus ] an analogous figure for the system considered in s&s91 . # 1#20.33#1 as the comparison of the figs .[ jb_plus ] and [ ab_plus ] shows , the main differences produced by the replacement of by are as follows : 1 .the chaotic regimes become less likely in the case , in the sense that the intervals of the dynamo number over which the system is chaotic decrease dramatically .2 . 
there exist indications for the presence of `` multiple attractors '' over substantial intervals of , consisting of equilibrium and periodic states .these can be seen as regions of spiky behaviour in the solid line in fig .[ jb_plus ] , for certain truncations ( ) .the behaviour of the system alternates between fixed point solutions ( where all exponents are negative ) and periodic orbits ( where only the first one is zero ) as the dynamo number is slightly changed .+ the presence of such behaviour is potentially of great interest since it suggests that there exist intervals of in which small changes in can drastically change the behaviour of the system .this is also interesting , if one considers settings in which or the initial conditions ( ic ) can vary slightly , but randomly , as the resulting behaviour would look very much like intermittency . to highlight thiswe have plotted in fig .[ fragility ] the behaviour of the truncation as a function of small changes in the dynamo number and the ic .as can be seen , small changes in either or ic can produce important changes in the behaviour of the system .this therefore shows that there are substantial regions of over which the behaviour of the system is sensitive to small changes in and ic .further , we have checked that this fragility is itself robust in the sense that taking a finer mesh of does not qualitatively change this overall behaviour .+ # 1#20.33#1 3 . regarding the overall behaviour of the systems with respect to increases in , we observe the following .for small dynamo numbers , the behaviour seems to settle down to equilibrium and periodic states as is increased .for example as can be seen from fig .[ jb_plus ] , for dynamo numbers up to , the behaviour settles down for . for larger values of , however , we observe an increase in the dominance of the `` multiple attractor '' regime for the values of considered here .it is likely , however , that with increasing , these intervals only establish themselves at higher values of .4 . the transition to chaos appears to be very abrupt in the case , with the system going from a fixed point into a chaotic regime very rapidly , at least to within a resolution of , with no intermediate behaviour being observed . for the case system goes from a fixed point limit cycle chaos . for still higher , our calculations indicate that chaos becomes scarce .chaotic regions were also found in the `` multiple attractors '' region , which were fragile with respect to small changes in the ic and the choice of .our results for the negative dynamo numbers are shown in fig .[ jb_minus ] . also , in view of the sparseness of the results reported in s&s91 for the models with negative dynamo numbers , we present fig .[ ab_minus ] as an analogous figure for their case .# 1#20.33#1 # 1#20.33#1 the main features of these models are : 1 .the chaotic regimes seem to become less likely in the case .in fact , for the mesh size in taken here , we only observed chaotic solutions in the case of and then only for very high dynamo numbers . 2 .there are substantial intervals ( in ) of `` multiple attractors '' ( consisting of equilibrium and periodic states ) for the case .3 . in both casesthe behaviour for stabilises as is increased .this occurs for for the equilibrium regime and for periodic regime .these results also indicate that there are parallels between the case with negative dynamo numbers and the case with positive dynamo numbers . 
in both cases ,multiple attractor regions seem to dominate for large values , as is increased .4 . for high in the case ,the transition is from to chaotic behaviour .this does not seem not true for where the chaotic behaviour seems to appear abruptly .to study the effects of including the term , we use the dynamic equation from kleeorin & ruzmaikin ( 1982 ) without a damping term . ]proportional to where is the combined ( turbulent plus ohmic ) diffusion of the field , the density of the medium and the magnetic constant . now using expression ( [ bspherical ] ) for and turning the system in a non - dimensional form using the same transformations as before ,we obtain where is a dimensionless constant .this allows the analogue of the eq .( [ 3 ] ) to be written in the form where and are dimensionless constants .considering the same boundary conditions and spectral expansions as in the case , eq .( [ alphab2 ] ) becomes where s are given by ,\nonumber\\ h_4(n , m , l , k)&= & \frac{1}{4 } \left[\delta(m - n , l - k)+\delta(m - n , l+k)\right.\\ & & \left.-\delta(m+n , l - k)-\delta(m+n , l+k)\right].\nonumber\end{aligned}\ ] ] note that is 1 if but 2 if and if is even .our results of the study of the system ( [ main3_1 ] ) , ( [ main3_2 ] ) and ( [ cn_2 ] ) for positive dynamo numbers are depicted in table [ tablealpha1 ] . as can be seen , the effect of the inclusion of the term is dramatic and seems to eliminate the possibility of chaotic behaviour for all .ccc & & + 2 & & + 3 & & + 4 & & + 5 & & + 6 & & + 7 & & + 8 & & + for the lower truncations of and , we only observe fixed point solutions for all up to . for higher order truncations , with moderate ,there is a sequence of fixed points followed by stable periodic cycles .the corresponding results for the negative dynamo numbers are shown in the table [ tablealpha2 ] , and again this is very similar to table [ tablealpha1 ] with no evidence for chaotic behaviour at small and moderate . in this casethe system has the origin as the fixed point for down to .ccc & & + 2 & & + 3 & & + 4 & & + 5 & & + 6 & & + 7 & & + 8 & & +in this section we employ the equation proposed by kleeorin et al .( 1995 ) in the form as the evolutionary equation for the back reaction of the magnetic field on the time dependent part of . in the above equation is the characteristic time on which the small scale magnetic helicity changes , which is typically much longer than the turbulent diffusion time scale . using the same expression for from eq .( [ bspherical ] ) and proceeding in the same way as in the previous cases we obtain the differential equations for to be where is a dimensionless constant .our results of the study of the system ( [ main3_1 ] ) , ( [ main3_2 ] ) and ( [ cn_t ] ) are shown in tables [ tablet1 ] and [ tablet2 ] .although more modes are required in order to obtain convergence for higher dynamo numbers , the results shown in table [ tablet1 ] and [ tablet2 ] seem to indicate that this type of change in the damping term does not produce qualitative changes in the behaviour of the system .this is reasonable , since the functional forms of the modal equations are quite similar in eqs .( [ cn_2 ] ) and ( [ cn_t ] ) .the inclusion of the term does not change the qualitative behaviour of the smaller truncations ( and for and for , where we observe only fixed points as before ) . 
at moderate dynamo numbers , the qualitative behaviour is almost the same and remains periodic for , but is changed slightly . [ tables [ tablet1 ] and [ tablet2 ] : observed regimes as a function of the truncation order for positive and negative dynamo numbers ] we also note that all systems considered here , in particular cases ( ii ) and ( iii ) , have a common pattern of behaviour , namely that as is increased , and oscillate with slowly increasing amplitudes about zero . on the other hand , oscillates with an increasing amplitude around a rapidly increasing average . also if , oscillates about a positive average and about a negative average for . we have studied the robustness of truncated dynamos including a dynamic equation , with respect to physically motivated changes in the driving term and a change in the damping term appearing in the dynamical equation . we studied these systems with respect to changes in the dynamo number , the truncation order and the ic . our results show that the changes in the driving term have important effects on the dynamical behaviour of the resulting systems . in particular we find that * chaos is much less likely in systems with a driving term of the form ( with positive ) , as opposed to those involving . * the inclusion of the term has a dramatic effect in that it suppresses the possibility of chaotic behaviour at moderate dynamo numbers . * changes in the sign of the dynamo number can also produce important changes . in the case where the driving term is given by , using makes chaotic behaviour much less likely ( which seems to be the mirror image of the case where the driving term is given by and ) . * in case ( i ) there exist substantial intervals of for which the systems seem to possess `` multiple attractors '' ( consisting of equilibrium and periodic states ) . as a result , small changes in either or the ic can produce important changes in these regimes . finally , to recapitulate our motivation for studying different formulations of dynamic feedback , we note that even the usual expression for the driving term , , derived from first principles could still be inappropriate , as it involves uncontrolled approximations . however , it is clear that has to be a pseudo - scalar ( because is a pseudo - scalar ) , and the most obvious possibilities are indeed the ones that we have studied . our investigations have shown that the actual choice can significantly alter the overall conclusion . therefore , all conclusions , especially those concerning the occurrence of chaos , should be taken with the utmost care . ec is supported by grant bd / 5708 / 95 , program praxis xxi , from jnict portugal . rt benefited from serc uk grant . this research also benefited from the ec human capital and mobility ( networks ) grant `` late type stars : activity , magnetism , turbulence '' no .
we investigate the behaviour of dynamos with a dynamic , whose evolution is governed by the imbalance between a driving and a damping term . we focus on truncated versions of such dynamo models which are often studied in connection with solar and stellar variability . given the approximate nature of such models , it is important to study how robust they are with respect to reasonable changes in the formulation of the driving and damping terms . for each case , we also study the effects of changes of the dynamo number and its sign , the truncation order and initial conditions . our results show that changes in the formulation of the driving term have important consequences for the dynamical behaviour of such systems , with the detailed nature of these effects depending crucially on the form of the driving term assumed , the value and the sign of the dynamo number and the initial conditions . on the other hand , the change in the damping term considered here seems to produce little qualitative effect .
let me start with a citation from the oxford english dictionary : * teleportation * . _ psychics _ and _ science fiction _ . the conveyance of persons ( esp . of oneself ) or things by psychic power ; also in futuristic description , apparently instantaneous transportation of persons , etc . , across space by advanced technological means . recently , the word `` teleportation '' has appeared outside of the realm of mystical and science fiction literature : in science journals . bennett , brassard , crepeau , jozsa , peres , and wootters ( bbcjpw ) proposed a gedanken experiment they termed `` quantum teleportation '' . classically , to move a person is to move all the particles it is made of . however , in quantum theory particles themselves do not represent a person : all objects are made of the same elementary particles . an electron in my body is identical to an electron in the paper of the page you are reading now . an object is characterized by the _ quantum state _ of the particles it is made of . thus , reconstructing the quantum state of these particles on other particles of the same kind at a remote location _ is _ `` transportation '' of the object . the quantum state of the object to be transported is supposed to be unknown . indeed , usually we do not know and can not find out what the quantum state of an object is . moreover , frequently an object is not in a pure quantum state , its particles may be correlated to other systems . in such cases the essence of the object is these correlations . in order to transport such correlations ( even if they are known ) , without access to the systems which are in correlation with our system , a method for teleportation of an unknown quantum state is necessary . quantum teleportation transfers the quantum state of a system and its correlations to another system . moreover , the procedure corresponds to the modern meaning of teleportation : an object is disintegrated in one place and a perfect replica appears at another site . the object or its complete description is never located between the two sites during the transportation . note that `` disintegration '' of the quantum state is a necessary requirement due to the no - cloning theorem . the teleportation procedure , apart from preparing in advance the quantum channels , requires telegraphing surprisingly small amounts of information between the two sites . this stage prevents `` instantaneous '' transportation .
indeed , because of special relativity , we can not hope to achieve superluminal teleportation : objects carry signals . due to the arguments presented above, i find the bbcjpw procedure to be very close to the concept of `` teleportation '' as it is used in the science - fiction literature .however , the name teleportation is less justified for the recent implementations of this idea in the laboratory , as well as for some other proposals for experiments . for me, an experiment deserves the name `` teleportation '' if i can give to alice ( the sender ) a system whose quantum state in unknown to her and that she can , without moving this system and without moving any other system which can carry the quantum state of the system , transport this state to bob ( the receiver ) which is located at a remote location . in the next sectioni shall discuss , in the light of my definition , the usage of the word `` teleportation '' .what i discuss in this section is essentially a semantic issue , but i feel that its clarification is important .i find the original teleportation paper to be one of the most important results in the field in the last ten years , and i think that it should be clearly distinguished from other interesting but less profound achievements .recently i heard the word `` teleportation '' in the context of nmr - type quantum computation experiments . using certain pulses , a spin state of a nucleus in a large moleculeis transported to another nucleus in the same molecule .the main deficiency of this experiment as teleportation is that it does not allow to transport an _ unknown _ quantum state .indeed , in the nmr experiments a macroscopic number of molecules have to be in a particular quantum state .if alice receives a single quantum object in an unknown quantum state , she can not duplicate it and in that manner prepare many copies in many molecules , due to the no - cloning theorem .an apparent weakness of the nmr experiment is that the internal coupling which plays the role of the channel for classical information required for teleportation _ can _ , in principle , carry the quantum state .however , due to the strong interaction with the environment , the quantum state transmitted through such a channel is effectively measured by the environment .only the eigenstates corresponding to the classical outcomes are stable under this interaction and , therefore , there is good reason to consider this channel to be classical .another place in which i encountered the word `` teleportation '' is the work on optical simulation of quantum computation .it includes a proposal for implementation of the idea to view `` teleportation '' as a particular quantum computation circuit .the problem in the optical experiment is that instead of the classical channel which is supposed to transmit two bits of information , real photons are moving from alice to bob and these photons _ can _ transmit the whole quantum state of the polarization degree of freedom of the photon .this is exactly the apparent weakness of the nmr - teleportation experiments mentioned above , but in the present case the environment does not make the quantum channel to be effectively classical .note that in the original proposal the quantum channel is explicitly replaced by a classical one to make the proposal akin to teleportation in the bbcjpw sense .it is the optical simulation of this proposal which is something less than teleportation .it seems that the authors were aware of this problem when they added a footnote : `` the 
term teleportation is used in the literature to refer to the transfer of the state of a qubit to another '' .i find this meaning to be too general .many processes corresponding to this definition were proposed ( and even implemented in laboratories ) long before the teleportation paper has appeared .next , let me discuss `` teleportation '' in the rome experiment .as i will explain in the next section , the main obstacle for successful reliable teleportation is the experimental difficulty to make one quantum object interact with another . in optical experiments quantum objects , photons ,interact with classical objects such as beam splitters , detectors , etc .popescu proposed a very elegant solution : two degrees of freedom of a _ single _ photon do interact effectively one with the other .this idea was successfully implemented in the rome experiment in which the polarization state of the photon was transported to another photon . however , the weakness of this experiment is that the quantum state to be teleported has to be the state of ( the second degree of freedom of ) one of the members of the epr pair which constitute the quantum channel of the teleportation experiment .therefore , this method can not be used for teleportation of an unknown quantum state of an external system .the authors view this experiment as `` teleportation '' because after the preparation alice can not find out the quantum state , which , nevertheless , is transported ( always and with high fidelity ) to bob .finally , let me discuss the innsbruck teleportation experiment .although the word `` teleportation '' appears in the title of the first letter , the second experiment is a much better demonstration of teleportation .i believe that the innsbruck experiment deserves the name teleportation .it showed for the first time that an unknown state of an external photon can be teleported .it is not a reliable teleportation : the experiment has a theoretical success rate of 25% only , and the employed methods can not , in principle , lead to reliable teleportation . for a system consisting of the probability of successful teleportation is exponentially small . recently , braunstein and kimble pointed out a weak point of the innsbruck experiment . in the current version of the experiment one might know that teleportation has been successful only after the time bob detects ( and , therefore , destructs ) the photon with the teleported state .thus , the name given by braunstein and kimble for the innsbruck experiment : `` a posteriori teleportation '' appears to be appropriate .however , as mentioned in the reply and in the comment itself , it is feasible to solve this problem by a modification of the experiment and therefore it is not a conceptual difficulty .another possible improvement of the demonstration of teleportation in the innsbruck experiment is using single input photons . 
in the current version of the experiment ,the polarizer which controls the input quantum state is stationary , and , therefore , many photons are created in the same state .thus , this state can hardly be considered an `` unknown '' quantum state .low intensity of the input beam and frequent changing of the angle of the polarizer is a simple and effective solution of the problem .an ideal solution is using a `` single - photon gun '' which creates single - photon states .apart from the impossibility of performing a measurement of the nondegenerate bell operator , there is another problem for achieving reliable teleportation of an unknown state of a single photon .today , there is no source which creates a single epr pair at will , something frequently called an `` event - ready '' source .the second innsbruck experiment is the best achievement in this direction : entanglement swapping may be viewed as creation of an entangled pair at the moment of the coincidence detection of the two photons coming from the beam splitter .what is missing is a `` sophisticated detection procedure '' which rules out the creation of two pairs in a single crystal .the original bbcjpw teleportation procedure consists of three main stages : ( i ) preparation of an epr pair , ( ii ) bell - operator measurement performed on the `` input '' particle and one particle of the epr pair , ( iii ) transmission of the outcome of the bell measurement and appropriate unitary operation on the second particle of the epr pair ( the `` output '' particle ) . completing ( i)-(iii ) ensures transportation of the pure state of the input particle to the output particle .it also ensures transportation of correlations : if the input particle were correlated to other systems , then the output particle ends up correlated to these systems in the same way .the main difficulty in this procedure is performing the bell measurement .recently it has been proved that without `` quantum - quantum '' interaction one can not perform measurement of the nondegenerate bell operator which is required for reliable teleportation . using only `` quantum - classical '' interactions one can perform a measurement of a degenerate bell operator , thus allowing a teleportation which succeeds sometimes .the size limitations of this paper allow only to outline the proof . 
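before turning to the proof, it may help to see the three stages in a toy simulation. the numpy sketch below teleports a random single-qubit state, realising the bell measurement by a conditional spin-flip (cnot) followed by a hadamard rotation, exactly the kind of quantum-quantum interaction discussed below; it is an illustrative state-vector calculation, not a description of any of the experiments above.

```python
import numpy as np

rng = np.random.default_rng(0)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n):
    """conditional spin-flip on `target` controlled by `control` (qubit 0 is leftmost)."""
    p0, p1 = np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)
    return (kron(*[p0 if q == control else I2 for q in range(n)])
            + kron(*[p1 if q == control else (X if q == target else I2) for q in range(n)]))

# stage (i): unknown input state on qubit 0, epr pair (|00> + |11>)/sqrt(2) on qubits 1 and 2
amp = rng.normal(size=2) + 1j * rng.normal(size=2)
psi_in = amp / np.linalg.norm(amp)
state = np.kron(psi_in, np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2))

# stage (ii): bell measurement on qubits 0 and 1 via conditional spin-flip + hadamard
state = kron(H, I2, I2) @ (cnot(0, 1, 3) @ state)
probs = (np.abs(state.reshape(2, 2, 2)) ** 2).sum(axis=2).ravel()
outcome = rng.choice(4, p=probs)
m0, m1 = outcome >> 1, outcome & 1

# stage (iii): the two classical bits fix bob's local correction
bob = state.reshape(2, 2, 2)[m0, m1, :].copy()
bob /= np.linalg.norm(bob)
bob = (np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)) @ bob

print(np.abs(np.vdot(psi_in, bob)))   # 1.0 up to numerical precision (and a global phase)
```

only the two classical bits (m0, m1) travel from the sender to the receiver; they select one of four local corrections, which is the small amount of transmitted information emphasised earlier.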
in order to prove that it is impossible to perform complete ( nondegenerate ) bell - operator measurements without using interactions between quantum systems ,i assume that any unitary transformation of single - particle states and any local single - particle measurement are allowed .there are four distinct ( orthogonal ) single - particle states involved in the definition of the bell states : two channels , and a two - level system which enters into each channel .we name the channels left ( l ) and right ( r ) , corresponding to the way the bell states are written : where is a set of orthogonal single - particle local states .the `` linearity '' implies that the evolution of the particle in one channel is independent on the state of the particle in another channel and , therefore , eq .( [ basic ] ) is enough to define the evolution of the bell states : proper symmetrization is required for identical particles .i assume that there are only local detectors and , therefore , only product states ( and not their superpositions ) can be detected .measurability of the non - degenerate bell operator means that there is at least one nonzero coefficient of every kind and if , for a certain , it is not zero , then all others are zero .this observation leads to numerous equations which , after some tedious algebra , yield the desired proof .a somewhat different approach was taken in proof .this proof considers only photons , but it proves the impossibility of non - degenerate bell - measurements for a more general case in which measurements in two stages are allowed .the procedure in which the choice of the measurements in the second stage depends on the results of the measurements in the first stage is an indirect quantum - quantum interaction : the state of one quantum system influences the result of the first measurement and the action on the second quantum system depends on this result .if we allow direct quantum - quantum interactions , we can achieve reliable ( theoretically 100% efficient ) teleportation . in this case, we can perform a measurement of the non - degenerate bell operator .indeed , a quantum - quantum interaction such as a conditional spin - flip transforms the bell states into product states which then can be measured using single - particle measuring devices .an alternative method of teleportation is based on _ nonlocal _ measurements `` crossed '' in space - time . in order to teleport a quantum state from particle 1 to particle 2 and , at the same time , the quantum state of particle 2 to particle 1, the following ( nonlocal in space - time ) variables should be measured ( see fig .1 ) : for any set of outcomes of the nonlocal measurements ( [ swap ] ) the spin state is teleported ; in some cases the state is rotated by around one of the axes , but the resulting rotation can be inferred from the nonlocal measurements . in order to perform nonlocal measurements ( [ swap ] ) , correlated pairs of auxiliary particles located at the sites of particle 1 and 2 are required . 
for completing the whole procedure we need two singlets instead of the one required in the original teleportation procedure .the reason for requiring more resources is that two - way ( rather than one - way ) teleportation is achieved .space - time locations of local couplings are shown .when the nonlocal measurements ( [ swap ] ) are completed , the states of the two particles are interchanged up to local rotations , signifies `` rotated '' .due to the lack of an effective photon - photon interaction , the currently available methods do not allow reliable teleportation of the photon polarization state .it seems that the most promising candidates for teleportation experiments which might have 100% success rate are proposals which involve atoms and electro - magnetic cavities .first suggestions for such experiments were made shortly after publication of the original teleportation paper and numerous modifications appeared since .the implementation of these proposals seems to be feasible because of the existence of the `` quantum - quantum '' interaction between the system carrying the quantum state and a system forming the epr pair .a dispersive interaction ( di ) of a rydberg atom passing through a properly tuned micro - wave cavity leads to a conditional phase flip depending on the presence of a photon in the cavity .a resonant interaction ( ri- ) between the rydberg atom and the cavity allows swapping of quantum states of the atom and the cavity .thus , manipulation of the quantum state of the cavity can be achieved via manipulation of the state of the rydberg atom .the atom s state is transformed by sending it through an appropriately tuned microwave zone . moreover ,the direct analog of conditional spin - flip the interaction can be achieved through the raman atom - cavity interaction .no teleportation experiment has been performed as of yet using these methods , but it seems that the technology is not too far from this goal .recent experiments on atom - cavity interactions teach us about the progress in this direction . until further progress in technologyis achieved , it is not easy to predict which proposal will be implemented first . assuming that resonant atom - cavity interactions can be performed with very good precision and that a dispersive interaction is available with a reasonable precision, it seems that the following is the simplest proposal , see fig . 2 .the quantum channel consists of a cavity and a rydberg atom in a correlated state . a particular resonant interaction , ri- , of an excited atom passing through an empty cavity , + ( a ) preparation of the quantum channel .an atom undergoes resonant interaction ri-/2 with the cavity and moves to a remote site .+ ( b ) the atom , carrying the quantum state to be teleported , interacts with the cavity dispersively and its state is measured .+ ( c ) the state of the cavity is measured using an auxiliary atom ..4 cm the bell states ( [ bell - ca ] ) have the form of eq .[ bell ] when the first in the product is identified with , the second , with latexmath:[$(1/\sqrt2)(|0\rangle + completes the bell measurement procedure . 
in order to make the measurement of the cavity state we perform another resonant interaction , ri- , between the cavity and an auxiliary atom prepared initially in the ground state ( fig .2c ) , this interaction transfers the quantum state of the cavity to this atom .the final measurements on the atoms distinguish between the states and , the states of the atoms are rotated while passing through the appropriate microwave zones before detection .when the bell measurement is completed , the quantum state is teleported up to the known local transformation determined by the results of the bell measurement .( this final local transformation is not shown in fig .one relatively simple method for `` two - way '' teleportation of atomic states is a direct implementation of the crossed nonlocal measurement scheme presented in the previous section .this method is described in ref . .one difficulty with the teleportation of atomic states is that usually experiments are performed with atomic _ beams _ and not with individual atoms .such experiments might be good for demonstration and studying experimental difficulties of teleportation , but they can not be considered as implementation of the original wisdom of teleportation or used for cryptographic purposes .in fact , optical experiments have this difficulty too , unless `` single - photon guns '' will be used . both foratomic and for optical experiments this difficulty does not seem to be unsolvable , but it certainly brings attention to experiments with trapped ions .there are many similarities between available manipulations with atoms and with ions , so the methods discussed above might be implemented for ion systems too .note also another recent proposal for teleportation using quantum - quantum interaction .it is based on rotation of the photon polarization due to presence of a single chiral molecule in an optical cavity .i am , however , skeptical about the feasibility of such experiment due to difficulties in tuning the interferometer in which photons undergo multiple reflections in the cavity ; the number of reflections has to be very large due to weakness of the interaction between the molecule and the photon .in the framework of nonlocal measurements there is a natural way of extending the teleportation scheme to systems with continuous variables .consider two similar systems located far away from each other and described by continuous variables and with corresponding conjugate momenta and . in order to teleport the quantum state of the first particle to the second particle ( and the state of the second particle to the first ) we perform the following `` crossed '' nonlocal measurements ( see fig .3 ) , obtaining the outcomes and : in ref . it is shown that these nonlocal `` crossed '' measurements `` swap '' the quantum states of the two particles up to the known shifts in and .indeed , the states of the particles after completion of the measurements ( [ cross - conti ] ) are .4 cm space - time locations of local couplings are shown .when the nonlocal measurements ( [ cross - conti ] ) are completed , the states of the two particles are interchanged up to the known shifts in and . the state of particle 2 after is the initial state of the particle 1 shifted by in and by in .similarly , the state of particle 1 is the initial state of particle 2 shifted by in and by in . 
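a heisenberg-picture gaussian toy model makes these continuous-variable crossed measurements concrete; it assumes vacuum quadrature variance 1/2 and an `` epr '' resource built from two squeezed vacua with squeezing parameter r (all names and values are illustrative assumptions, not a model of any experiment). the displacement applied by the receiver anticipates the correction step described next; the residual noise on the output quadratures shrinks as exp(-r) and vanishes in the ideal epr limit.

```python
import numpy as np

rng = np.random.default_rng(2)
vac = 0.5          # vacuum quadrature variance in units with [x, p] = i

def epr_pair(r, size):
    """'epr' pair from two squeezed vacua combined on a 50/50 beam splitter:
    var(x_a - x_b) = var(p_a + p_b) = 2 * vac * exp(-2 r)."""
    x1 = np.exp(+r) * rng.normal(0.0, np.sqrt(vac), size)
    p1 = np.exp(-r) * rng.normal(0.0, np.sqrt(vac), size)
    x2 = np.exp(-r) * rng.normal(0.0, np.sqrt(vac), size)
    p2 = np.exp(+r) * rng.normal(0.0, np.sqrt(vac), size)
    xa, xb = (x1 + x2) / np.sqrt(2), (x1 - x2) / np.sqrt(2)
    pa, pb = (p1 + p2) / np.sqrt(2), (p1 - p2) / np.sqrt(2)
    return xa, pa, xb, pb

n = 100000
for r in [0.0, 0.5, 1.0, 2.0]:
    xa, pa, xb, pb = epr_pair(r, n)
    x_in = 1.3 + rng.normal(0.0, np.sqrt(vac), n)    # unknown coherent-state input
    p_in = -0.7 + rng.normal(0.0, np.sqrt(vac), n)
    u, v = x_in - xa, p_in + pa                      # the crossed quadrature measurements
    x_out, p_out = xb + u, pb + v                    # receiver shifts by the transmitted outcomes
    # p_out - p_in behaves identically to x_out - x_in
    print(f"r = {r:.1f}   std(x_out - x_in) = {np.std(x_out - x_in):.3f}")
```

at r = 0 the added noise equals two vacuum units, the classical limit, while for large r the output quadratures reproduce the input up to a noise that decays exponentially with the squeezing, which is why the degree of squeezing controls the attainable fidelity.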
after transmitting the results of the local measurements , and ,the shifts can be corrected ( even if the quantum state is unknown ) by appropriate kicks and back shifts , thus completing a reliable teleportation of the state to and of the state to .surprisingly , the implementation of the reliable teleportation of continuous variables is possible .braunstein and kimble made a realistic proposal for teleporting the quantum state of a single mode of the electro - magnetic field .this remarkable result is an implementation of a variation of the scheme described above which achieves a one - way teleportation . in their method is `` ''defined for a single mode of an electro - magnetic field , and correspondingly is the conjugate momentum of .the analog of the epr pair is obtained by shining squeezed light with a certain from one side and squeezed light with a certain from the other side onto a simple beam splitter .the analog of the local bell measurement is achieved using another beam splitter and homodyne detectors .the shifts in and which complete the teleportation procedure can be done by combining the output field with the coherent state of appropriate amplitude fixed by the results of the homodyne measurements .note also a related proposal for teleporting a single - photon wave packet .very recently the braunstein - kimble proposal for implementation of continues variable teleportation has been performed in california institute of technology .this is the first reliable teleportation experiment .the meaning of `` reliable '' ( `` unconditional '' in ) is that theoretically it is always successful .it is the first experiment in which the final stage of teleportation , i.e. , transmission of the classical information to bob and the appropriate transformation which results in the appearance of the teleported state in the bob s site , has been implemented .the weakness of this experiment is that the teleported state is significantly distorted .the main reason for low fidelity is the degree of squeezing of the light which controls the quality of the epr pairs , the quantum channel of the teleportation .significant improvement of the squeezing parameters is a very difficult technological problem .thus , in this type of experiment one can not reach the high fidelity of ( conditional ) teleportation experiments of photon polarization states .one may note an apparent contradiction between the proof of section [ proof ] that 100% efficient teleportation can not be achieved using linear elements and single - particle state detectors and the successful _reliable _ teleportation experiment of the state of the electro - magnetic field which involved only beam - splitters and local measuring devices reported above .indeed , it is natural to assume that if reliable teleportation of a quantum state of a two - level system is impossible under certain circumstances , it should certainly be impossible for quantum states of systems with continuous variables .however , although it is not immediately obvious , the circumstances are very different .there are numerous differences .the analog of the bell operator for continuous variables does not have among its eigenvalues four states of the general form ( [ bell ] ) where and signify some orthogonal states .another problem is that one can not identify `` the particles '' : in the beam - splitter _one _ input port goes to _ two _ output ports .one can see a `` quantum - quantum '' interaction : the variable of one of the output ports of the beam splitter becomes 
equal to , essentially , the sum of the quantum variables of the input ports .the absence of such `` quantum - quantum '' interactions is an essential ingredient in the proof of section [ proof ] . if , however , we consider the `` particles '' to be photons ( which do not interact with one another ) then the homodyne detectors which measure are not single - particle detectors another constraint used in the proof . note also that the braunstein - kimble method is not applicable directly for teleporting where is a spatial position of a quantum system .an additional quantum - quantum interaction which converts the continuous variable of position of a particle to the variable of the electro - magnetic mode is required .my complaints about the ( mis)interpretation of the word `` teleportation '' in section ii shows that i am ( over)sensitive about this issue .this is because i was thinking a lot about it , resolving for myself a paradox which i , as a believer in the many - worlds interpretation ( mwi ) had with this experiment .consider teleportation , say in the bbcjpw scheme .we perform some action in one place and the state is immediately teleported , up a local transformation ( `` rotation '' ) , to an arbitrary distant location .but relativity theory teaches us that anything which is physically significant can not move faster than light .thus it seems that it is the classical information ( which can not be transmitted with superluminal velocity ) about the kind of back `` rotation '' to be performed for completing the teleportation which is the only essential part of the quantum state .however , the amount of the required classical information is very small .is the essence of a state of a spin-1/2 particle just 2 bits ?i tend to attach a lot of physical meaning to a quantum state . for me , a proponent of the mwi , everything is a quantum state .but i also believe in relativistic invariance , so only entities which can not move faster than light have physical reality .thus , teleportation poses a serious problem to my attitude .i was ready to admit that `` i '' am just a quantum state of particles .this is still a very rich structure : a complex function on .but now i am forced to believe that `` i '' am just a point in the ? !the resolution which i found for myself is as follows : in the framework of the mwi , the teleportation procedure does not move the quantum state : the state was , in some sense , in the remote location from the beginning . the correlated pair , which is the necessary item for teleportation , incorporates all possible quantum states of the remote particle , and , in particular , the state which has to be teleported . the local measurement of the teleportation procedure splits the world in such a manner that in each of the worlds the state of the remote particle differs form the state by some known transformation .the number of such worlds is relatively small .this explains why the information which has to be transmitted for teleportation of a quantum state the information which world we need to split into , i.e. , what transformation has to be applied is much smaller than the information which is needed for the creation of such a state .for example , for the case of a spin-1/2 particle there are only 4 different worlds , so in order to teleport the state we have to transmit just 2 bits . 
as for teleporting myself , the number of worlds is the number of distinguishable ( using measuring devices and our senses ) values of and for all continues degrees of freedom of my body .teleportation of people will remain a dream for the foreseeable future .first , we have to achieve the reliable teleportation of an unknown quantum state of an external system with reasonable fidelity which is also only a dream today .although the teleportation of an unknown quantum state has not yet been achieved , the current experiments clearly demonstrate that it can be done .i urge the experimenters to perform a persuasive teleportation experiment : carol gives to alice ( single ) particles in different states ( unknown to alice ) , alice teleports the states to bob , bob gives them back to carol who tests that what she gets is what she has sent before .i am grateful for very useful correspondence with chris adami , gilles brassard , samuel braunstein , john calsamiglia , lior goldenberg , daniel lidar , sergey molotkov , harald weinfurter , asher peres , sandu popescu , and anton zeilinger .the research was supported in part by grant 471/98 of the basic research foundation ( administered by the israel academy of sciences and humanities ) .
since its discovery in 1993 we have witnessed an intensive theoretical and experimental effort centered on teleportation . very recently it was claimed in the press that `` quantum teleportation has been achieved in the laboratory '' ( t. sudbery , _ nature _ * 390 * , 551 ) . here i briefly review this research , focusing on its connection to _ nonlocal measurements _ , and question sudbery 's statement . a philosophical inquiry into the paradoxical meaning of teleportation in the framework of the many - worlds interpretation is added . _ school of physics and astronomy , raymond and beverly sackler faculty of exact sciences , tel aviv university , tel - aviv 69978 , israel . _
the dynamics of many complex systems , not only in natural sciences but in economical and social contexts as well , is usually presented in the form of time series .these series are frequently separated by random events which , in spite of their randomness , show some structure and apparent universal features . during the last few yearsthere have been endeavors to explain the sort of actions involved in interhuman communication . according to this framework , decisions are taken based on a queuing process and are aimed to be valid for a wide range of phenomena such as correspondence among people , the consecutive visits of a web portal or even transactions and trading in financial markets .the main conclusion of these studies is that , in order to reproduce the empirical observations as well as to give reason of the heterogeneous nature of outgoing tasks , the timing decision has to adopt a rule of non - trivial priority . otherwise , the implementation of , for instance , the simple rule : `` first - in - first - out '' leads to poissonian timing between consecutive outgoing events and this seems to deviate from many empirical observations .one convenient frame to approach these phenomena is provided by the continuous time random walk ( ctrw ) . within this frame oneis basically concerned with the appropriate description of , the so - called pausing - time density ( ptd ) , which gives the probability of having a certain time interval between two consecutive events .many empirical ptd s present long - tailed profiles suggesting a self - similar hierarchy in the entire probability distribution . followingthis indication some authors claim that the slow decay of the ptd obeys a power - law whose exponent is almost universal in the sense that it seems to adopt only two different values and . in the next sectionwe will present a simple approach which gives a power - law reproducing these exponents .besides the ptd which doubtlessly provides maximal information on interevent statistics , the deep structure of the fractal hierarchy is perhaps more easily unveiled by looking at the -moments of the interevent times instead of solely observing the ptd tails .one is thus able to answer questions such as whether the process is monofractal or multifractal and if there eventually exist different regimes depending on the value of ( the order of the -moment ) .this information obtained from data can afterwards guide us to find out the main ingredients of a more refined theoretical model for human decision dynamics .this is certainly the chief motivation of this work .herein we propose an alternative framework to the existing ones which are basically based on queuing processes but that it still considers the heterogeneous nature of the executed tasks . within our approachit is possible to deal with analytical expressions , not only simulations , and we believe we provide good tools to describe the more subtle structure arisen from -moments .the approach we propose has its roots in physics and is reminiscent of mixture of distributions hypothesis in finance that can be traced back to the 1970s , the variational principle of energy dissipation distributions at different timescales in turbulence in the 1990s , the superstatistics and nonextensive entropy .in fact , the ptd was first introduced within the ctrw model which was originally established by montroll and weiss . 
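before turning to the model itself , the following few lines illustrate the kind of `` superstatistics '' mixture on which the present approach is built : a simple conditional density whose characteristic time depends on a hidden variable , averaged over a density for that variable . the functional forms and parameter values are illustrative choices that anticipate the next section ; they are not fits to any data set .

    # numerical sketch of a superstatistics mixture: psi(t) = int rho(eps) psi(t|eps) d eps,
    # with a poissonian conditional ptd and tau(eps) growing exponentially with eps.
    # the one-sided exponential rho and all parameter values are illustrative assumptions.
    import numpy as np

    tau0  = 1.0    # microscopic time scale of the conditional ptd
    eps_c = 1.0    # scale in tau(eps) = tau0 * exp(eps / eps_c)
    eps_s = 2.0    # scale of the hidden-variable density rho(eps)

    eps  = np.linspace(0.0, 60.0, 20001)      # integration grid over eps
    deps = eps[1] - eps[0]
    rho  = np.exp(-eps / eps_s) / eps_s       # one-sided exponential density

    def psi(t):
        """unconditional pausing-time density: the mixture over eps."""
        tau = tau0 * np.exp(eps / eps_c)
        cond = np.exp(-t / tau) / tau         # poissonian psi(t | eps)
        return np.sum(rho * cond) * deps

    t = np.logspace(0, 6, 40)
    p = np.array([psi(ti) for ti in t])

    # local logarithmic slope of psi(t); for t >> tau0 it approaches the
    # power-law exponent -(1 + eps_c/eps_s) = -1.5 for these parameters
    slope = np.gradient(np.log(p), np.log(t))
    print("predicted tail exponent :", -(1.0 + eps_c / eps_s))
    print("slope at the largest t  :", slope[-4:])

for these illustrative parameters the mixture develops a power - law tail with exponent -(1 + eps_c / eps_s ) = -3/2 , even though each conditional density is purely poissonian ; this is the mechanism exploited in what follows .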
under this very generalsetting , the present development has been inspired by the work of scher and montroll who in 1975 proposed the so - called `` valley model '' to describe the power - law relaxation of photocurrents created in amorphous ( glossy ) materials .we shall use the same idea but in a completely different background .the paper is organized as follows . in sect .[ sec2 ] we present the fundamentals of scher and montroll s model and apply it to explain the emergence of long - tailed distributions in the pausing - time statistics . in sect .[ sec3 ] we address the question of the moments of the interevent times and obtain the conditions for the multifractal behavior of such moments . in sect .[ sec4 ] we test multifractality on large financial data sets .conclusions are drawn in sect . [ sec5 ] andsome technical details are in the appendix .scher and montroll s `` valley model '' proposes a conditional ptd as the starting distribution .this conditional density accounts for the probability that a given carrier is trapped during a time interval within a potential well of depth .after this time interval has elapsed the carrier jumps to another potential valley .it is next assumed that the energy is a random variable described by a density .we thus have a `` superstatistics '' with the unconditional pausing - time density given by the conditional ptd is assumed to be the simple exponential ( poisson ) form .\label{rown : psicond}\ ] ] this choice is quite reasonable since for a given the emerging statistics is homogeneous because all occurrences have the same origin and in consequence they enjoy an identical characteristic time scale .scher and montroll also assume that the relationship between the random energy and the characteristic time of the distribution is given by the simple exponential form : where and , a fundamental constant of the model , is measured in units of energy .we should note that in scher - montroll s approach is the thermal energy of the environment at temperature ( is the boltzmann constant ) .we remark at this point that the valley model is consistent with the most basic properties of a queuing process recently addressed by vazquez et al .indeed , in that process a set of incoming messages , or tasks , arrives at random . to these messagesa certain priority labeled by is attached .the execution time of a given task with priority is described by the conditional density . in the most generalsetting is also a random variable characterized by a density .we are thus faced again with the `` superstatistics '' mentioned above since the timing of the outgoing tasks is governed by the unconditional ptd given by eq .( [ rown : supstat ] ) .in the simplest case of a `` first - in - first - out '' queue the priority has the same value for all tasks , hence and the unconditional ptd reads , \label{poissonian}\ ] ] which is a poissonian density with a single characteristic time scale ( the mean time between consecutive outgoing events ) . in terms of decision theory , the situation is comparable to that of having no priority protocol at all .another particular situation would be to assign priorities in a uniform random manner with where possible values of are constrained inside the interval ] .note that all -moments considered above obey the normalization condition , i.e. , they are equal to for .the dissipative and fluctuating scales and merge into a single scale when .this equality means that dissipation and fluctuation are linked by where . 
for ( the gaussian case )this relation reads which is the analog of the usual fluctuation - dissipation relation .this leads us to look at eq .( [ f - d2 ] ) as the fractional version of the fluctuation - dissipation theorem suitable to the present approach .let us finally observe that when the fractional fluctuation - dissipation relation holds the monofractal and multifractal parts of the -moment are both governed by the same scale , that is shall now confront our analytical model with empirical data .we focus on moments and multifractality and leave for a future presentation extensive testing of the ptd s obtained in sect .[ sec2 ] and their comparison with previous studies .we have decided to apply our approach to financial markets because finance is one of the fields where large amounts of data are easily available .in particular we collect tick - by - tick data of futures contracts on several indices and also on a single stock ( see table [ table : data ] ) .the assets chosen have a very diverse nature thus providing wide generality to our analysis ..empirical data specifications of the tick by tick intertransaction data used .these are futures contracts on german index ( dax ) , on the dow jones american index ( dji ) , on the polish index ( wig20 ) and on the foreign exchange us dollar - deutsche mark ( usdm ) and us dollar - euro ( eurus ) .we also add a single stock : telefonica ( tef ) . [ cols="<,^,^,^",options="header " , ] [ table : tab0 ] if we want , however , to have all empirical facts in the nutshell of a single formula we should generalize eq .( [ mf - time_2 ] ) so as to include the monofractal behavior when becomes large .the requirements that such a heuristic multifractal ( hmf ) formula has to satisfy are : ( i ) it must obey the normalization condition ( i.e. , for it it should be equal to ) ; ( ii ) for small values of it must reproduce eq .( [ mf - time_2 ] ) ; while ( iii ) for larger values of the hmf formula must tend to a monofractal form .the heuristic formula we propose is : where |q| . \label{exponent}\ ] ] note that we have added a fourth parameter , , which modifies the scale by a new one ( cf . with eq .( [ rown : tqfinalnew ] ) ) . equation ( [ rown : tqfheu ] ) obviously satisfies the normalization condition .moreover , for small we have and we recover eq .( [ mf - time_2 ] ) . also for and eq .( [ rown : tqfheu ] ) tends to the monofractal form : figure [ figure : tqheuristic ] shows ( solid curves ) how the hmf formula fits the dax and telefonica empirical data on the whole range of values of .this is additionally confirmed for the shorter range by the zoom provided by the inset graph . for predictions of mf and hmf formulas can not be distinguished .the value of parameters of the hmf model obtained by the fit is given for comparison in table [ table : tab0 ] .the predictions of formula ( [ rown : tqfheu ] ) have been tested on the available data sets with satisfactory results by plotting the function . 
\label{test3}\ ] ] versus .if the hmf hold , we then would be able to see a merging of all data sets along the curve where ( cf .( [ rown : tqfheu ] ) and ( [ test3 ] ) ) .the model also allows for studying explicitly the dependence on of transformed -moment .\label{transf}\ ] ] for doing this we escale the values in each market as where and are these estimated parameters from market as shown in table [ table : tab0 ] .the unlabeled parameters and concerns a reference market which again corresponds to the dji futures .if the model hold , there would be a linear dependence between the transformed -moments ( [ transf ] ) and across the different markets since the hmf model ( [ exponent ] ) is invariant across the markets through and where would be its slope. figures [ fig : test3 ] and [ fig : test3b ] show the satisfactory results that supports the validity of the heuristic multifractal formula .defined in eq .( [ test3 ] ) as a function of the order from six empirical financial data sets using parameters of table [ table : tab0 ] .we finally observe the merging of all data sets for along a curve of the form where . ]-moment given by eq .( [ transf ] ) as a function of when ( bottom line ) , and ( line above ) and for six empirical financial data .solid lines verify linear dependence across different markets . ] of the telefonica ( tef ) stock .solid line provides the numerical computation of for the mf model as given by eq .( [ psi ] ) with parameters , , and . dashed and dottedlines are respectively the fits with the q - exponential ( with and q ) and the weibull probabilities ( with and ) . ]we finally mention that the problem of the explicit forms of conditional ptd and distribution which gives the heuristic formula ( [ rown : tqfheu ] ) according to relation ( [ tq ] ) is still a challenge .we can otherwise check the soundness and self - consistency of our multifractal approach by looking at its sojourn probability ( i.e , the decumulative probability ) and compare it with empirical data .we take the sojourn probability instead of the pausing time distribution because verification with empirical data is firmer .recall that the mf model takes the poisson density provided by eq .( [ rown : psicond ] ) while obeys a stretched exponential as given in eq .( [ rown : rhoeps ] ) . substituting these densities into eq .( [ sp ] ) yields the expression needs to be numerically evaluated and for doing this we have taken the parameters of telefonica given in table [ table : fullfitb ] and slightly modify them to improve the fit with empirical data . solid line in fig .[ figure : sojourn ] shows the resulting curve and it is there compared with the empirical sojourn probability of telefonica . the empirical analysis on the pausing time density and the sojourn probability in financial data has been extensively studied during the last few years .some recent papers argue that can be described properly by the tsallis q - exponential ^{1/(q-1)}}\ ] ] with q or the weibull distribution these candidates are also represented in fig .[ figure : sojourn ] and the quality of their fits are comparable to that of our mf model . 
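the empirical quantities used above can be obtained from a record of transaction times with a few lines of code . the sketch below computes raw moments of the interevent intervals and the empirical sojourn ( decumulative ) probability , and fits the latter with the q - exponential and weibull forms quoted above ; the synthetic sample , the initial guesses and the exact parametrizations are illustrative assumptions , not the estimation procedure used for table [ table : tab0 ] .

    # illustrative sketch: raw q-moments of the interevent times and the empirical
    # sojourn probability, fitted with q-exponential and weibull forms.
    # the synthetic sample below stands in for real tick data.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    tau = rng.pareto(1.6, size=20000) + 1e-3      # replace with real interevent times

    # raw q-moments <tau^q>^(1/q); multifractality shows up as a non-trivial
    # dependence of these scales on the order q
    for q in np.arange(0.5, 4.01, 0.5):
        print(f"q = {q:3.1f}   <tau^q>^(1/q) = {np.mean(tau ** q) ** (1.0 / q):10.4f}")

    # empirical sojourn probability P(tau > t)
    t = np.sort(tau)
    P = 1.0 - np.arange(1, t.size + 1) / t.size
    mask = P > 0                                   # drop the very last point

    def q_exponential(t, tau0, q):                 # usual tsallis form (assumed)
        return (1.0 + (q - 1.0) * t / tau0) ** (-1.0 / (q - 1.0))

    def weibull(t, tau0, beta):
        return np.exp(-(t / tau0) ** beta)

    pq, _ = curve_fit(q_exponential, t[mask], P[mask], p0=(1.0, 1.3),
                      bounds=([1e-9, 1.0 + 1e-6], [np.inf, 3.0]))
    pw, _ = curve_fit(weibull, t[mask], P[mask], p0=(1.0, 0.8),
                      bounds=([1e-9, 1e-3], [np.inf, 5.0]))
    print("q-exponential fit : tau0 = %.4f , q = %.4f" % tuple(pq))
    print("weibull fit       : tau0 = %.4f , beta = %.4f" % tuple(pw))

with real tick data in place of the synthetic sample , these are the ingredients entering the comparisons shown in the figures above .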
a more accurate study among the differences and similarities of the mf model ( and eventually the hmf ) between both the theoretical and empirical distributions is certainly necessary .however , we leave a more complete study of the sojourn probability for a future work .in this work we have extended the original ctrw formalism , within the frame of scher - montroll s valley model , to furnish an analytical treatment for the statistics of interevent times .the model developed has been tested to financial time series , although the analysis is applicable to the broader area of interhuman communications .the approach presented consists in obtaining the ptd and the -moments of the interevent time intervals through a random variable described by a probability density .the nature of this hidden variable depends on the problem at hand . in the original work of scher and montroll represented the depth of the potential well where carriers were trapped .in other contexts , such as queuing processes , may represent the priority assigned to an incoming task and for financial markets we are exploring the possibility that would be related with transaction volumes , market depth or bid - ask spread .whatever the case , the overall approach assumes an expression for the conditional ptd governing the timing of incoming events ( charged carriers , messages , news , etc . ) . if these incoming events are supposed to arrive at random the natural choice for the conditional ptd is the poisson distribution as given in eq .( [ rown : psicond ] ) .a second assumption is that for a given the mean time between consecutive events , , depends on the hidden variable through the simple exponential form expressed by eq .( [ rown : taueps ] ) . finally , in terms of the probability distribution of the unconditional ptd and the -moments ( which both refer to executed tasks or outcoming events ) are respectively given by eqs .( [ rown : supstat ] ) and ( [ tq ] ) . with these simple ingredients we have been able to obtain long - tailed ptd s and multifractal -moments .thus , for instance , for a laplace density we have which agrees with many empirical observations of diverse phenomena from queuing theory to finance . regarding moments the choice of a stretched exponential as the probability density for leads to a multifractal behavior of the form where is a conveniently chosen scale .we have tested the multifractal behavior of intertransaction times on large financial sets of tick - by - tick data ( see for multifractal analyses in other financial settings ) .the overall conclusion is that -moments are multifractal for small values of ( ) , while for larger orders becomes monofractal .a more refined but heuristic analytical formula has also been proposed which fits the whole range of empirical -moments . nevertheless , the problem of the explicit forms for both the conditional ptd and the density resulting in the heuristic expression is still a challenge .let us finish by noting that in some places around the paper we have highlighted some thermodynamic similarities in our method .in fact the multifractal approach we have herein developed is feasible of a thermodynamic interpretation .we will develop this analogy in a future work .jp and jm acknowledge partial financial support from direccin general de investigacin under contract no .fis2006 - 05204 .we want to evaluate the integral ( [ iq2 ] ) which we write in the form where . 
for large we can employ the saddle - point approximation or laplace s method .expanding $ ] and performing the resulting gaussian integral we obtain ^{1/2}e^{-[\lambda h(x_0)+{\rm o}(\lambda^{-1/2 } ) ] } , \label{a3}\ ] ] where is the minimum of .that is , is the solution to i.e. , but and we rewrite eq .( [ a4 ] ) as since the right hand side of this equation is positive ( recall that then necessarily .hence and the two extremes of are when , and when . on the other hand ,recalling that has the same sign as we can write in the form and using eq .( [ a5 ] ) we have collecting terms we finally obtain where ^{1/2 } \left(\frac{|q|}{\alpha}\right)^{(2-\alpha)/2(\alpha-1 ) } , \label{a9}\ ] ] and ( see eq .( [ lambda ] ) ) 99 a .-barabsi , nature ( london ) * 207 * , 435 ( 2005 ) .k. yamasaki , l. muchnik , s. havlin , a. bunde , and h. e. stanley , proc .usa * 102 * , 9424 ( 2005 ) .a. vzquez , b. rcz , a. lukcs , and a - l barabsi , phys .lett . * 98 * , 158702 ( 2007 ) .t. nakamura , k. kiyono , k. yoshiuchi , r. nakahara , z.r .struzik , and y. yamamoto , phys .lett . * 99 * , 138103 ( 2007 ) .a. vzquez , j. g. oliveira , z. dezs , k .-goh , i. kondor , and a .-barabsi , phys .e * 73 * , 036127 ( 2006 ) .j. perell , m. montero , l. palatella , i. simonsen , and j. masoliver , j. stat .p11011 ( 2006 ) .a. vzquez , phys .lett . * 95 * , 248701 ( 2005 ) .p. k. clark , econometrica * 41 * , 135 ( 1973 ) . c. doering and p. constantin , phys .e * 49 * , 4087 ( 1994 ) . c. tsallis , j. stat .phys . * 52 * , 479 ( 1988 ) .montroll and g.h .weiss , j. math .* 6 * , 167 ( 1965 ) .aspects and applications of the random walk _( north - holland , amsterdam , 1994 ) .g. pfister and h. scher , adv .* 27 * , 747 ( 1978 ) .h. scher and e.w .montroll , phys .b * 12 * , 2455 ( 1975 ) .kehr , r. kutner , and k. binder , phys .b * 23 * , 4931 ( 1981 ) .m. abramowitz and i. stegun _ handbook of mathematical functions _( dover , new york , 1965 ) .r. kutner , chem .284 , 481 ( 2002 ) .m. kozowska and r. kutner , physica a 357 282 ( 2005 ) .a. bunde and s. havlin ( eds . ) _ fractals and disordered systems _ ( springer , new york , 1996 ) ._ asymptotic expansions _ ( dover , new york , 1956 ) .f. mainardi , m. raberto , r. gorenflo , and e. scalas , physica a * 287 * , 468 ( 2000 ) .l. sabatelli , s. keating , j. dudley , and p. richmond , eur .b * 27 * , 273 ( 2002 ) .j. masoliver , m. montero and g.h .weiss , phys .e * 67 * , 021112 ( 2003 ) .r. kutner and f. switala , quant .finance * 3 * , 201 ( 2003 ) .p. repetowicz and p. richmond , physica a * 343 * 677 ( 2004 ) .e. scalas , r. gorenflo and f. mainardi , phys .e * 69 * , 011107 ( 2004 ) .j. masoliver , m. montero and j. perell , and g.h .weiss , j. econ .behav . organ . * 61 * , 577 ( 2006 ) .m. politi , e. scalas , physica a * 366 * , 466 ( 2008 ) .z - q jiang , w. chen , w - x zhou , arxiv:0804.2431v2 .p. gopikrishnan , v. plerou , x. gabaix , and h. e. stanley , phys .e * 62 * , r4493 ( 2000 ) .f. lillo , j.d .farmer , and r.n .mantegna , nature * 421 * , 129 ( 2003 ) .l. gillemot , j. d. farmer , and f. lillo , quant .finance * 6 * , 371 ( 2007 ) .e. bacry , j. delour , and j.f .muzy , phys .e * 64 * , 026103 ( 2001 ) .l. calvet and a. fisher , rev .. stat . * 84 * , 381 ( 2002 ) .t. di matteo , quant . finance , * 7 * , 21 ( 2007 ) . c. beck and f. schlogl _ thermodynamics of chaotic systems _ ( cambrigde university press , cambridge , 1995 ) .
social , technological and economic time series are divided by events which are usually assumed to be random , albeit with some hierarchical structure . it is well known that the interevent statistics observed in these contexts differs from the poissonian profile by being long - tailed , with resting and active periods interwoven . understanding the mechanisms that generate such consistent statistics has therefore become a central issue . the approach we present is taken from the continuous time random walk formalism and represents an analytical alternative to the models of non - trivial priority that have been proposed recently . our analysis also goes one step further by looking at the multifractal structure of the interevent times of human decisions . here we analyze the inter - transaction time intervals of several financial markets . we observe that the empirical data describe a subtle multifractal behavior . our model explains this structure by taking the pausing - time density in the form of a superstatistics where the integral kernel quantifies the heterogeneous nature of the executed tasks . a stretched exponential kernel provides a multifractal profile valid over a limited range . a suggested heuristic analytical profile is capable of covering a broader region .
as ccn offers receiver - driven mode of communication , the distributed applications running on it needs to be modified from their usual sender - driven paradigm .checkpointing and rollback - recovery are well known techniques that allow processes to make progress in spite of failures .however , ccn is devoid of any such mechanism of failure recovery . keeping the above points in mind, we bring ccncheck which offers a sender - optimized way of running distributed applications .ccncheck also implements checkpointing for applications running on ccn .a typical distributed application in ccn is assumed to be running on multiple nodes and uses a common channel to send / receive interests and data .we further assume that : 1 .processes do not have any common clock/ memory .processes follow a fail - stop model of failing i.e. processes can crash by stopping execution and remain halted until restarted . checkpoint is saved local state of a process . set of local states and messages in common channel is global state of a system . in checkpointing, every process takes a local checkpoint to ensure a global consistent state which can later be used to recover a system from failure . + our work is centered around using interests as notifications / signals in ccn .formally , a distributed system running on ccncheck works on following model : 1 .every node knows about other nodes running the same distributed application .every node is defined by a unique name which is pre - appended by the application name the process is running .the interest packet is not stored in the router s cache .the router only forwards the interest using its fib entry . to run a sender - driven distributed application in ccn we use an approach very similar to solving hidden terminal problem in ieee 802.11 networks .the sender first issues a * request - to - send ( rts ) * interest to the desired destination process .this rts packet acts as a notification to destination about an incoming data .the name of rts contains the identifying name of its issuer using which the destination process issues a * clear - to - send ( cts ) * interest back to the sender .cts serves as the necessary interest required to send the data in ccn .figure [ fig:1 ] depicts process a sending some data to process b using ccncheck .ccncheck communication handler , scaledwidth=30.0% ] distributed multi - threaded checkpoint ( dmtcp ) is a research - based , transparent , user - level checkpointing tool for distributed applications .dmtcp follows a blocking type algorithm of checkpointing to ensure a global consistent state at each checkpoint .it employs a stateless centralized coordinator to coordinate checkpoint requests between nodes .+ we have developed a plugin for dmtcp which enables it to work in ccn environment . even though ccn is deployed as an overlay on tcp / ip networks for which dmtcp works well , however , some more logical changes are necessary to make dmtcp function in ccn. we also formalize various inconsistent checkpoint scenarios due to uncoordinated checkpointing in ccn and devise a method to overcome such situations .some of the changes made are : 1 .dmtcp uses flush token to clear out tcp sockets during checkpoint process to ensure consistent checkpoint . 
as ccn works atop of interests and data packets ,we have designed a `` flush interest '' which ensures that checkpoint is consistent from any orphan interests and data .dmtcp coordinator is modified to detect a ccn network and register itself with ccn daemon on invoking .the coordinator is run as a stateless process with a name unique to the environment / organization .we have designed interest packets which is used by coordinator to checkpoint processes in the application .the restart from checkpoint process is able to resolve any non - responded interests due to lack of pending interest table ( pit ) entries .the discovery services in reconnect phase on restart from a checkpoint works using ccn namespaces .* system model * + ccncheck uses three layer abstraction model . 1 .* communication handler : * it handles the interest and data packets to be sent between communicating nodes .it is built on ccnx v0.8.2 .* checkpoint handler : * it provides the checkpoint mechanism in ccn and is based on dmtcp .* end - user applications : * these are applications to be run in a distributed environment. it can be in c / c++ language .* interest naming rules * + the naming format for rts and cts packets in ccnx are as follows : + ccnx://application name / receiver address / type of interest / sender address + the _ request - to - send _ and _ clear - to - send _ interests use signal name rts and cts respectively .the checkpoint interest , however , is only one - way notification ( i.e. from coordinator to process ) .thus , interest name does not have the sender s name appended to it and it is denoted by signal type check. similarly , _flush interest _ has a signal name flush but is appended with the last name of last interest sent figure [ fig:2 ] shows naming rules for ccncheck .* applications * + ccncheck was deployed on a test - bed of six interconnected nodes in ccn network .we have developed two sample distributed applications to review our system .we have also used an existing application to check the compatibility of our system . 1 .a simple c application which keeps counting till infinity is run locally on each node with different start times and is killed later .the goal was to check the consistency of checkpoint taken by ccncheck before failure .2 . a distributed c++ application in which the participating nodes compute the consecutive numbers of fibonacci sequence in an iterative manner .this application utilizes the distributed capabilities of ccncheck to send the result to the next node after each subsequent computation .3 . a ccn enabled vlc player which can stream videos on a content centric network .+ we are able to checkpoint all the applications listed above .
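as a concrete illustration of the naming rules given above , the following sketch builds the four kinds of interest names used by ccncheck . the helper function and the example application name are hypothetical and written here only to make the format explicit ; they are not part of the ccnx 0.8.2 api or of the actual ccncheck / dmtcp code .

    # hypothetical helper illustrating the ccncheck naming format quoted above:
    # ccnx://<application>/<receiver>/<signal type>[/<trailing component>]
    def interest_name(app, receiver, signal, suffix=None):
        name = f"ccnx://{app}/{receiver}/{signal}"
        return name if suffix is None else f"{name}/{suffix}"

    app = "fibonacci"                                  # one of the sample applications above
    rts   = interest_name(app, "nodeB", "rts", suffix="nodeA")      # nodeA notifies nodeB
    cts   = interest_name(app, "nodeA", "cts", suffix="nodeB")      # roles swapped for the reply
    check = interest_name(app, "nodeB", "check")                    # coordinator -> process, no sender
    flush = interest_name(app, "nodeB", "flush", suffix="rts-nodeA")  # identifies the last interest
                                                                      # sent; exact encoding is an
                                                                      # assumption here
    for n in (rts, cts, check, flush):
        print(n)

the rts and cts names carry both end points , the checkpoint interest is a one - way notification from the coordinator and therefore omits the sender , and the flush interest appends a reference to the last interest sent , as described above .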
we consider the problem of efficiently checkpointing a distributed application in content centric networks so that it can withstand transient failures . we present ccncheck , a system which enables a sender - optimized way of checkpointing distributed applications in ccns and provides an efficient mechanism for failure recovery in such applications . ccncheck 's checkpointing mechanism is a fork of the dmtcp repository . ccncheck is capable of running any distributed application written in c / c++ .
ground - based laser interferometer detectors such as or virgo are expected to detect -signals in data that have been , or will soon be , collected .the most promising and well - understood astrophysical sources of gravitational waves are in close orbits , which consist of two compact objects such as primordial black holes , neutron stars and/or stellar - mass black holes .potentiality of a detection verges towards one event per year .however , the detection rate strongly depends on the coalescence rate and the volume of universe that detectors can probe .while we can not influence the coalescence rates , we can increase the volume or distance at which a signal can be detected , which highly depends on ( i ) the design of the detectors and their sensitivities , and ( ii ) on the detection technique that is used .detector sensitivity can be increased most certainly ; but once data have been recorded , only the deployment of an optimal method of detection can ensure the highest detection probability , and that is a passport , not only to probe the largest volume of universe possible , but also to detect a gw - signal directly for the first time . fortunately enough , altough the two body problem can not be solved exactly in general relativity , post - newtonian ( hereafter pn ) approximation have been used to obtain accurate _models _ of the late - time dynamics of .therefore , we can deploy a matched filtering technique , which is an optimal method of detection when the signal buried in gaussian and stationary noise is known exactly . the models that we used for detection are also called _template families_. the shape of the incoming -signals depends on various parameters , which are not known _ a priori _ ( e.g. , the masses of the two component stars in the case of a search for non - spinning binaries ) .thus , we have no choice but to filter the data through a set of templates , which is also called a _ template bank _ and must cover the parameter space that is astrophysically relevant . since we can not filter the data through an infinitely large number of templates the bank is essentially discrete .consequently , the mismatch between any signal and the nearest template in the discrete template bank will cause reduction in the . spacing between templates must be chosen so as to render acceptable this snr reduction as well as the computational demand required by the cross correlation of the data with the entire discrete template bank .as we shall see , the spacing between templates is set by specifying a _ minimal match _ between any signal and the template bank . in practice ,template families are approximation of the true gravitational wave signal , and no true signal will perfectly match any of the template families .however , in this paper we shall consider that template and simulated signal belong to the same template family .the template bank placement is one of the key aspects of the detection process .nonetheless , its design is not unique .there are essentially two types of template bank placements .the first one does not assume any knowledge on the signal manifold ; the second does .the first type of placement computes matches between surrounding templates until two templates have a match close to the requested minimal match , and computes matches repeatedly over the entire parameter space until it is fully populated . 
using geometrical considerations ,an efficient instance of this technique has been developed .a second approach , described in various papers , utilizes a metric that is defined on the signal manifold .it uses local flatness theorem to place templates at proper distances over the parameter space .we developed a template bank placement in that was implemented and fully tested within the ligo algorithm library .this template bank was used in the analysis of data from different ligo science runs .we also shown that although robust with respect to the requirement ( matches should be above the minimal match ) , it is over - efficient .this result was expected because we used a square lattice to place templates over the parameter space . in this paper , we fully describe and validate a hexagonal template bank placement that is currently used by the scientific collaboration so as to analyze the most recent science runs . in section [ sec : formalism ] , we recapitulate some fundamental techniques and notions that are needed to describe the bank placement , and previous results on the square template bank placement . we also provide a framework to validate a template bank . in section [ sec :algorithm ] , we describe the algorithm that places templates on a hexagonal lattice . in section [ sec : simulation ] , we summarize the outcome of the simulations performed to test the hexagonal bank . we envisage various parameter spaces that allows to search for , , and , or signals .we also considered design sensitivity curves for the current and advanced generation of ground - based detectors . in section [ sec : simspa ] , we show that the proposed hexagonal template bank has the required specifications . finally ,in addition to the case of a template family based on the stationary phase approximation , we also investigate in section [ sec : simother ] the possibility to use the same hexagonal bank placement with other template families including pad resummation and effective one - body approximation .we show that there is no need to construct specific template bank for each template family : the proposed bank can be used for the different families that we looked at in this paper .matched filtering and template bank placement use formalisms that are summarized in this section .we also review the main results of the square placement , and recapitulate the framework introduced in that allows us to validate a template bank .the matched filtering technique is an optimal method to detect a known signal , , that is buried in a stationary and gaussian noise , .the method performs a correlation of the data with a template . in this paper, we shall assume that and are generated with the same model so that a template can be an exact copy of the signal .matched filtering of the data with a template can be expressed via the inner product weighted by the noise , , and is given by note that for simplicity , we will ignore the time within the inner product expressions . a template anda signal can be normalized according to the after filtering by is the simulations that we will perform assume that template and signal are normalized , that is , and . in this paper , we are interested in the fraction of the optimal snr obtained by filtering the signal with a set of template , therefore , we can ignore the noise , and becomes . strictly speaking , does not refer to a snr anymore , but to the ambiguity function , which is by definition always less than or equal to unity if the two waveforms are normalized . 
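several of the expressions above lost their explicit form ; for concreteness , the conventions assumed in the sketches below are the standard ones of the inspiral literature ( they may differ from the original only by irrelevant constant factors ) :

\[ \langle a , b \rangle \;=\; 4\,\mathrm{Re}\!\int_{0}^{\infty}\frac{\tilde a (f)\,\tilde b^{*}(f)}{S_h(f)}\,df , \qquad \hat h \;=\; \frac{h}{\sqrt{\langle h , h \rangle}} , \]

so that the overlap between a normalised signal and a normalised template , \( \langle \hat s , \hat h \rangle \) , is the ambiguity function , bounded by unity as stated above .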
in the following ,we shall use the notion of _ match _ introduced in ; the match between two templates is the inner product between two templates that is maximized over the time ( using the inverse fourier transform ) and the initial orbital phase ( using a quadrature matched filtering ) .the incoming signal has unknown parameters and one needs to filter the data through a set of templates , i.e. , a template bank .the templates are characterized by a set of parameters .the templates in the bank are copies of the signal corresponding to a set of values , where is the total number of templates .a template bank is optimally designed if is minimal and if for any signal there always exists at least one template in the bank such that where is the minimal match mentioned earlier .usually , in searches for , the value of the minimal match is set by the user to 95% or 97% , which corresponds to a decrease in detection rate of 15% and 9% , respectively .nevertheless , the minimal match may have a much smaller value for the first stage of a hierarchical search ( e.g. , 80% ) , or for a one - stage search of periodic signals ( e.g. , 70% or lower ) .the distance between two infinitesimally separated normalized templates on the signal manifold is given by where is the partial derivative of the signal with respect to the parameter .so , the quadratic form defines the metric induced on the signal manifold .the metric is used to place templates at equal distance in the parameter space .the distance between templates in each dimension is given by in practice , using such leaves a fraction of the parameter space uncovered , and overlap between templates is required ( e.g. , in the square placement , spacing is actually set to ) . since we restrict ourself to the case of non - spinning waveforms , depends on 4 parameters only : the two component masses , and which may vary from sub - solar mass to tens of solar mass systems , the initial orbital phase , and the time of coalescence .we can maximized over and analytically , therefore the parameter space that we need to cover with our template bank is a 2-dimensional space only . for conciseness , we can represent the gw - waveform with a simplified expression given by ^{2/3 } \cos [ \varphi(t ) + \varphi_c ] , \label{eq : waveform1}\ ] ] where is the ( invariant ) instantaneous frequency of the signal measured by a remote observer , the phase of the signal is defined so that it is zero when the binary coalesces at time , and is a numerical constant representing the amplitude . the asymmetric mass ratio is , where is the total mass of the system .there exist amplitude corrections up to 2.5pn , the importance of which for detection and estimation is shown in .however , in this work , we use restricted post - newtonian models only and limit pn - expansion of the phase to 2pn order . moreover , in the template bank placement , namely for the metric computation , we consider the , for which the metric can be derived analytically .nevertheless , other template families can be used both for injection and filtering ( see section [ sec : models ] ) .the placement that we proposed in uses the metric based on the model , and the spacing , as defined in eq .( [ eq : step ] ) . 
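a minimal numerical sketch of the match just defined may be useful here : the modulus of the complex correlation performs the quadrature maximisation over the initial orbital phase , and an inverse fft scans the coalescence time . the flat psd and the leading - order ( newtonian ) stationary - phase waveform below are placeholders , not the design curves or the 2pn model used later in the simulations .

    # sketch of the match, maximised over coalescence time (inverse fft) and
    # initial orbital phase (modulus of the complex correlation).
    # flat psd and newtonian-order spa waveform are illustrative placeholders.
    import numpy as np

    f_low, f_high, df = 40.0, 1024.0, 1.0 / 64.0
    f = np.arange(0.0, f_high, df)
    band = f >= f_low

    def spa_waveform(mchirp_sec):
        """restricted stationary-phase waveform at leading (newtonian) order."""
        h = np.zeros_like(f, dtype=complex)
        ff = f[band]
        phase = (3.0 / 128.0) * (np.pi * mchirp_sec * ff) ** (-5.0 / 3.0)
        h[band] = ff ** (-7.0 / 6.0) * np.exp(-1j * phase)
        return h

    psd = np.ones_like(f)            # flat psd, placeholder
    psd[~band] = np.inf              # no weight below the lower cut-off

    def inner(a, b):
        return 4.0 * df * np.real(np.sum(a * np.conj(b) / psd))

    def match(a, b):
        a = a / np.sqrt(inner(a, a))
        b = b / np.sqrt(inner(b, b))
        # complex correlation as a function of time lag
        z = 4.0 * df * len(f) * np.fft.ifft(a * np.conj(b) / psd)
        return np.max(np.abs(z))

    tsun = 4.925e-6                          # one solar mass in seconds (G = c = 1)
    h1 = spa_waveform(1.219 * tsun)          # chirp mass of a ~1.4 + 1.4 msun system
    h2 = spa_waveform(1.221 * tsun)          # a nearby point in the parameter space
    print("self match      :", match(h1, h1))
    print("neighbour match :", match(h1, h2))

the self match is unity by construction , and the match between the two trial waveforms drops as their chirp masses separate ; this is precisely the quantity that the template bank placement must keep above the minimal match .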
since the model explicitly depends on the two mass parameters and , then the spacing are function of these two quantities as well .however , the metric expressed in these two coordinates is not a constant ; it is not a constant either if we were to use the component masses , and .the preference of chirptimes , denoted and ( see appendix [ annex : tools ] , eqs .[ eq : t0t3 ] ) as coordinates on the signal manifold is indeed more practical because these variables are almost cartesian .although not perfectly constant for pn - order larger than 1pn , we shall assume that the metric is essentially constant in the local vicinity of every point on the manifold .we could use any combinations of chirptimes , but using the pair , there exists analytical inversion with the pair ( see appendix [ eq : meta ] ) .the parameter space to be covered is defined by the minimum and maximum component masses of the systems considered ( and ) , and possibly the minimum and maximum total mass ( and ) as shown in fig .[ fig : space ] . the lower cut - off frequency , at which the template starts in frequency , sets the length of the templates and therefore directly influences the metric components , the parameter space , and the number of templates . in , we showed how the size of the template bank changes with .we also investigated the loss of match due to the choice of .we generally set so that the loss of match is of the order of a percent .+ two instances of template bank placements . in the two plots ,we focus on a small area of the parameter space presented in figure [ fig : space ] .we used a square ( top panel ) and hexagonal ( bottom panel ) placement . for convenience ,we re - scale the metric components so that , .each template position is represented by a small circle . around each template position, we plot an ellipse that represents iso - match contour of .each ellipse contains an inscribed square or hexagon which emphasizes how ellipses overlap each other .we can see that squares ( top ) slightly overlap each other .this is because templates are layed along equal constant line and not along the eigen - vector directions , which change over the parameter space . in the hexagonal placement , we take care of this problem short - coming , and therefore hexagons are perfectly adjacent to each other : the placement is optimal.,title="fig:",scaledwidth=45.0% ] we briefly remind how the proposed square template bank works .first , templates are placed along the or line starting from the minimum to the maximum mass .then , additional templates are placed so as to cover the remaining part of the parameter space , in rows , starting at along lines of constant until a template lies outside the parameter space .the spacing between lines is set adequately .distances between templates are based on a square lattice .an example of such a placement is shown in fig .[ fig : squarevshexa ] .one of the limitations of the placement is that templates are not placed along the eigenvectors of the metric but along the standard basis vectors that describe the space .this approximation make the ellipses slightly more overlapping than requested and may also create holes when the orientation of the ellipses varies significantly ( i.e. , at high mass regime ) .the square placement is also over - efficient as compared to a hexagonal placement ( see fig .[ fig : square ] ) . independently of the template bank placement , the template bankmust be validated to check whether it fulfills the requirements ( e.g. 
, from eq .[ eq : mm ] ) .first , we perform monte - carlo simulations so as to compute the _ efficiency _ vector , , given by where is the number of templates in the bank , the number of injections . the vectors and correspond to the parameters of the simulated signals and the templates , and are the models used in the generation of the signal and template , respectively . in all the simulations , we set .furthermore , we can analytically maximize over the unknown orbital phase and , therefore , . the efficiency vector and the signal parameter vector are useful to derive several figures of merit .the cumulative distribution of ( fig .[ fig : square ] , bottom panel ) indicates how quickly matches drop as the minimal match is reached .nevertheless , the cumulative distribution function of hides the dependency of the matches upon masses .therefore , we also need to look at the distribution of versus total mass ( e.g. , fig . [ fig : square ] , top panel ) , or versus , or chirp mass , ( see appendix for an exact definition ) .usually , we look at only .indeed , in most cases , the dynamical range of is small ( from 0.1875 to 0.25 in the case ) .finally , we can quantify the efficiency of a template bank with a unique value , that is the _ safeness _ , , given by ideally , we should have a template bank such that . is a generalization of the left hand side of eq .[ eq : mm ] on injections .the higher is , the more confident we are with the value of the safeness .ideally , the number should be several times the size of the template bank that is , so that statistically we have at least one injection per template .the sub - index of the safeness is the ratio between and and indicates the relevance of the simulations .the safeness provides also a way of characterizing the template bank : if is less than the expected minimal match , then the bank is _ under - efficient_. conversely , a template bank can be over - efficient like in fig .[ fig : square ] .+ efficiencies of the square template bank . for conveniencewe remind the reader of some results of the square template bank provided in . in the simulations , we used stationary phase approximant models for both injections and templates .injections consist of binary neutron stars .we used 4 design sensitivity curves ( ligoi , advanced ligo , virgo and geo ) , and for each of them we performed 10,000 injections . in the top panel , we show all the results together : all injections are recovered with a match higher than 95% , as requested . in the bottom panel ,we decomposed the 4 simulations and show that all of them behave similarly .actually , we can see that most of the injections are recovered with even higher matches ( above 97% ) showing the over - efficiency of the placement.,title="fig:",scaledwidth=40.0%,scaledwidth=35.0% ]in the basis vectors , both amplitude and orientation of the eigenvectors change , which may imply a laborious placement . in this section , we describe the hexagonal placement that is conceptually different from the square placement and takes into account the eigenvectors change throughout the parameter space .+ although the hexagonal placement algorithm is independent of any genetic or evolutionary algorithms , it can be compared to biological process , and we will use this analogy to explain the placement .first , let us introduce a _ cell _ that contain a template position ( e.g. 
, ) , the metric components defined at this position , and a unique identification number that we refer to as an i d .a cell covers an area defined by an ellipse with semi - axis equal to .the goal of a cell is to populate the parameter space with an offspring of at most 6 cells ( hexagonal placement ) .a cell can be characterized by the following principles : 1-initialization : : a cell is created at a given position in plane , not necessarily at a physical place ( i.e , can be less than 1/4 ) .the initialization requires that + * metric components at are calculated , * a unique i d number is assigned , * 6 connectors are created and set to zero .+ finally , if the cell area intersects with the parameter space , then it has the ability to survive in its environment : it is _conversely , a cell whose coverage is entirely outside the parameter space is _sterile_. 2-reproduction : : a fertile cell can reproduce into 6 positions that are the corner of a hexagon inscribed in the ellipse whose semi - axes are derived from the metric components s .a cell that has reproduced is a _ mother cell _ and its offspring is composed of 6 _ daughter cells_. once a daughter cell is initialized , it can not reproduce in place of its mother .this is taken into account via the connection principle .3-connection : : following the reproduction process , a mother cell sets the connections with its daughter cells by sharing their ids .therefore , a mother cell knows the ids of its daughter cells and vice - versa .moreover , when a mother cell reproduces , it also sets up the connections between two adjacent daughters so that they both know their ids .these connections prevent cells to reproduce in a direction that is already populated .4-sterility : : a cell becomes sterile ( can not reproduce anymore ) when both reproduction and connection principles have been applied . a cell that is outside the parameter spaceis also sterile ( checked during the initialization ) .5-exclusivity : : the reproduction process is _ exclusive _ : only one cell at a time can reproduce .it is exclusive because a cell can not start to reproduce while another cell is still reproducing .the cell population evolves by the reproduction of their individuals over as many _ generations _ as needed to cover the entire parameter space . the first generation is composed of one cell only .the position of this first cell corresponds to .we could start at any place in the parameter space .however , local flatness is an approximation and the author thinks it is better not to start at where the metric evolves quicker ( highest mass ) .the first cell is initialized ( first principle ) .then , the cell reproduces into 6 directions ( second principle ) .once the reproduction is over , the connectors between the mother cell and its daughters are set ( third principle ) , and finally , the cell becomes sterile ( fourth principle ) .this loop over the first cell has created a new generation of 6 cells , and each cell will now follow the four principles again . however , the new generation of cells will not be able to reproduce in 6 directions .indeed , connectors between the first mother cell and its daughters have been set , and therefore the new cell generation can not propagate towards the mother direction . furthermore , the 6 new cells have already 2 other adjacent cells .therefore , each cell of the second generation can reproduce 3 times only . 
moreover , some of the cells might be outside the parameter space and are sterile by definition .once a new generation has been created , the previous generation must contain sterile cells only .the algorithm loops over the new generation while there exists fertile cells .the first generation is a particular case since it contains only one cell .however , the following generations are not necessarily made of a unique cell , and the reproduction warrants a careful procedure : the reproduction takes place cell after cell starting from the smallest i d .moreover , in agreement with the fifth principle , the cells of the newest generation wait until all the cells of the previous generation have reproduced .the reproduction over generations stops once no more fertile cells are present within the population .since the parameter space is finite , the reproduction will automatically stop .figure [ fig : hexalgo ] illustrates how the first 3 generations populate the parameter space .once the reproduction is over , some cells might be outside the physical parameter space , or outside the mass range requested .an optional final step consists inpushing back " the corresponding cells inside the parameter space .first , we can push back the non - physical cells only , that is the cells that are below the line towards the relevant eigen - vector directions onto the line .second , there are other cells for which mass parameters correspond to physical masses but that are outside the parameter space of interest .nothing prevents us from pushing these cells back into the parameter space as well .this procedure is especially important in regions where the masses of the component objects are large . indeed , keeping templates of mass larger than a certain value causes problems owing to the fact that the search pipeline uses a fixed lower cut - off frequency and the waveforms of mass greater than this valuecan not be generated . in the simulations presented in this paper , we move the cells that are below the boundary , and keep the cells that are outside the parameter space but with .useful equations that characterize the boundaries of the parameter space are provided in appendix [ annex : tools ] .a flow chart of the algorithm is also presented in appendix [ annex : algorithm ] .an example of the proposed hexagonal placement is shown in fig .[ fig : space ] . 
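the cell - based algorithm described above can be reduced to a short sketch . it assumes a user - supplied metric function in the ( tau_0 , tau_3 ) plane and a membership test for the parameter space ; the mother / daughter connectors are replaced by a visited set keyed on rounded positions , which plays the same role of preventing a cell from reproducing into a direction that is already populated , and a cell is declared sterile when its centre ( rather than its whole ellipse ) lies outside the space . this is an illustration of the principles , not the ligo algorithm library implementation .

    # compact sketch of the cell-based hexagonal placement described above.
    import numpy as np
    from collections import deque

    def hexagonal_bank(theta0, metric, inside, minimal_match=0.95, max_cells=100000):
        """place templates over the region where inside(theta) is true."""
        # squared proper radius of a cell, assuming the convention
        # match(theta, theta + dtheta) ~ 1 - g_ij dtheta_i dtheta_j
        r2 = 1.0 - minimal_match
        bank, visited = [], set()
        queue = deque([np.asarray(theta0, dtype=float)])
        while queue and len(bank) < max_cells:
            theta = queue.popleft()
            key = tuple(np.round(theta, 6))
            if key in visited:            # stands in for the mother/daughter connectors
                continue
            visited.add(key)
            if not inside(theta):         # sterile cell: neither kept nor reproduced
                continue
            bank.append(theta)
            g = np.asarray(metric(theta), dtype=float)   # local 2x2 metric at the cell
            vals, vecs = np.linalg.eigh(g)
            semi = np.sqrt(r2 / vals)                    # semi-axes of the iso-match ellipse
            # reproduction: six daughters at the corners of the hexagon inscribed in
            # the ellipse, oriented along the local eigen-directions of the metric
            for k in range(6):
                ang = k * np.pi / 3.0
                queue.append(theta + vecs @ (semi * np.array([np.cos(ang), np.sin(ang)])))
        return np.array(bank)

    if __name__ == "__main__":
        toy_metric = lambda th: np.diag([1.0, 4.0])               # constant, for illustration
        box = lambda th: 0.0 <= th[0] <= 20.0 and 0.0 <= th[1] <= 10.0
        print(len(hexagonal_bank((10.0, 5.0), toy_metric, box)))

on a physical parameter space the same loop runs in the chirptime coordinates , with the non - physical cells pushed back onto the equal - mass boundary as described above ; fig . [ fig : space ] shows the outcome of the full implementation .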
in this example , the minimum and maximum individual mass component are and , and the lower cut - off frequency is of hz .we can see that none of the templates are placed below the equal mass line whereas some are placed outside the parameter space .figure [ fig : squarevshexa ] gives another placement example .the ratio of a circle s surface to the area of a square inscribed within this circle is , where is the circle s radius .the ratio of the same circle s surface to an inscribed hexagon equals .the ratio of the square surface to the hexagon surface is therefore about 29% , which means that about 29% less templates are needed to cover a given surface when a hexagonal lattice is used instead of a square lattice ; computational cost could be reduced by the same amount .tables [ tab : squaresize ] and [ tab : hexasize ] summarize the sizes of the proposed square and hexagonal template bank placements .the hexagonal template bank reduces the number of templates by about 40% ( see table .[ tab : gain ] ) .this gain is larger than the expected 29% , and is related to the fact that we take into account the evolution of the metric ( orientation of cells / ellipses ) on the parameter space .computational time required to generate a hexagonal bank appears to be smaller than the square bank . in table[ tab : compcost ] , we record the approximate time needed to generate each template bank , which is of the order of a few seconds even for template banks as large as 100,000 templates . it is also interesting to note that most of the computational time is spent in the computation of the moments ( used by the metric space ) rather than in the placement algorithm .the template bank size depends on various parameters such as the minimal match and lower cut - off frequency that strongly influence the template bank size .other parameters such as the final frequency at which moments are computed , or the sampling frequency may also influence the bank size .there are also refinements that can be made on the placement itself .two main issues arise from our study .first , the hexagonal placement populates the entire parameter space . yet, parameter space is not a square but rather a triangular shape .in the corner of the parameter space , a hexagonal placement is not needed anymore : a single template overlaps with two boundary lines . in this case , hexagonal placement can be switched to a bisection placement that places templates at equal distances from the two boundary lines .a secondary issue is that the hexagonal placement is aligned along an eigenvector direction .nothing prevents us to place templates along the other eigenvector direction .it seems that this choice affects neither the efficiencies nor the template bank size significantly ..[tab : square ] typical square template bank size .we summarize the number of templates of typical square template banks .we consider several design sensitivity curves such as ligo , virgo , ( see appendix [ annex : psds ] for analytical expressions and lower cut - off frequencies ) , and 4 typical parameter spaces ( see section [ sec : simulation ] for the mass range . [ cols="^,^,^,^,^,^",options="header " , ]the proposed square and hexagonal template bank placements are used to search for various in the ligo and geo 600 gw - data .they are used to search for primordial black holes , binary neutron stars , binary black holes and a mix of neutron stars and black holes . 
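the geometric origin of the quoted figure can be checked directly . with \( r \) the radius of the iso - match circle in coordinates where the metric is proportional to the identity , the inscribed square and hexagon have areas

\[ A_{\rm square } = 2\,r^{2} , \qquad A_{\rm hex } = \frac{3\sqrt{3}}{2}\,r^{2} \simeq 2.60\,r^{2} , \qquad \frac{A_{\rm hex } - A_{\rm square}}{A_{\rm square } } = \frac{3\sqrt{3}}{4 } - 1 \simeq 0.30 , \]

so a hexagonal cell covers roughly 30% more area than a square cell inscribed in the same circle , which is the expected saving in the number of templates ; the measured gain of about 40% in table [ tab : gain ] is larger because the hexagonal cells also follow the local eigen - directions of the metric , as noted above .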
in the past ,the parameter space was split into sub - spaces that encompass different astrophysical binary systems such as , , , and/or .we can filter the data through a unique template bank that covers the different types of binaries , however , we split the parameter space into the same 4 sub - spaces that have been used to validate the square template bank placement so that we can compare results together .we use the same mass range as in our companion paper , that is pbh binaries with component masses in the range \odot ] , \odot ] and a black hole with component mass in the range \odot ] .we also use the same psd by incorporating the design sensitivities of current detectors ( geo , virgo and ligo - i ) and advanced detectors ( advanced ligo ( or ligo - a ) , and ego ) .each of the psds has a design sensitivity curve , provided in appendix [ annex : psds ] .the lower cut - off frequencies are the same as in and are summarized in the appendix as well . in the case of the ego psd , which we have not used previously ,we set the lower cut - off frequency hz .actually , this value can be decreased to about 10 hz for the case , increasing the template bank sizes . in all the simulations , we tend to use common parameters so as to simplify the interpretation .we use a sampling frequency of 4096 hz over all simulations because the last stable orbit is less than the nyquist frequency of 2048 hz for most of the , , and signals .the computational time is strongly related to the size of the vectors , whose length depends on the time duration of the template / signal used in our simulations . in order to optimize the computational cost , in each search , we extract the longest template duration that we round up to the next power of 2 .the vector duration is then multiply by 2 for safety .we set the minimal match to 95% .we considered 5 types of template families that are described later .we can estimate the number of simulations .for instance , using injections , with 5 different psds , 4 searches ( , , ) , and 5 template families , we have a total of injections , which need to be filtered through templates .if we approximate to be 10,000 and to be 10,000 as well , it is clear that computational cost is huge . in order to speed up the simulations , we chose not to filter signals with all the available templates , but only a relevant fraction of them around the injected signal ; this selection is trivial since template and signal are based on the same model .+ + theoretical calculations using post - newtonian approximation of general relativity give waveforms as expansions in the orbital velocity , where .the pn expansions are known up to order in amplitude and in phase .however , we limit this study to restricted post - newtonian , that is all amplitude corrections are discarded .moreover , we expand the flux only to 2pn order .the energy function and the flux are given by we can obtain the phase starting from the kinematic equations and and the change of binding energy giving a phasing formula of the form . there are different ways in which the above equations can be solved . for convenience ,we introduce labels so as to refer to different physical template families that are used within the gravitational wave community and in our simulations .taylort1 : : if we integrate the equations ( [ inspiralwavephasingformula ] ) numerically , we obtain the so - called taylort1 model . if instead , we use the p - approximant for the energy and flux functions ( * ? ? ?* ; * ? ? 
?* ) , then one generates the padet1 model .taylort2 : : we can also expand in a taylor expansion in which case the integrals can be solved analytically to obtain the phase in terms of polynomial expressions as a function of , which corresponds to taylort2 model .this model is not used in this paper but results are very similar to the taylort3 model .taylort3 : : from taylort2 , can be inverted and the polynomial expression of used within the expression for to obtain an explicit time - domain phasing formula in terms of .this corresponds to the taylort3 model .eob : : the non - adiabatic models directly integrate the equations of motion ( as opposed to using the energy balance equation ) and there is no implicit conservation of energy used in the orbital dynamics approach .the eob maps the real two - body conservative dynamics onto an effective one - body problem wherein a test mass moves in an effective background metric .taylorf2 : : the phasing formula is expressed in the fourier domain , and is equivalent to the case already mentioned . first , we validate the hexagonal template bank with a model based on the ( also labelled taylorf2 ) , used to compute the metric components .we set , and compute and .we intensively tested this bank by setting for each psd and each parameter space considered .using the template bank size from table [ tab : hexasize ] , the ratio between template bank size and number of simulations varies from 1.7 to 375 , which is much larger than unity in agreement with discussions that arose in sec .[ sec : be ] .the results are summarized in fig .[ fig : taylorf2_1 ] and [ fig : taylorf2_2 ] . in fig .[ fig : taylorf2_1 ] , we notice that the hexagonal bank is efficient over the entire range of binary , , and searches .moreover , the safeness is close to the minimal match ( ] . in the case, the bank is also efficient for the various psds with total mass between m_\odot ] .the bank is also under - efficient with matches as low as 93% but for very high mass systems above .the match below the minimal match are related to the ligo - i psd only , for which the lower cut - off frequency is 40 hz . for high mass and nearly equal mass systems , the waveforms tend to be very short and contain only a few cycles : the metric is not a good approximation anymore. it also explains the feature seen at high mass , that shows some oscillations in the matches : a single template matches with many different injected signals .one solution to prevent matches to drop below the minimal match is to refine the grid for high mass range by decreasing the distances ( i.e , increasing ) between templates in this part of the parameter space .however , the high mass also correspond to the shortest waveforms which lead to a high rate of triggers in real data analysis .therefore it is advised not to over - populate the high mass region .overall , the hexagonal placement has the same behavior as in but the bank is not over - efficient anymore in most cases .the square and hexagonal template banks are designed for taylorf2 model .yet , models presented in section [ sec : models ] do not differ from each other significantly so long as , which is the case for , waveforms and most of the and waveforms .therefore , we expect the efficiencies of the template banks to be equivalent to the spa - model results .the models used in this section have the same pn - order ( i.e. 
, 2pn ) as in the taylorf2 model .the simulation parameters are identical except the number of simulations that is restricted to for computational reasons .finally , we tested only the , and searches .the using model being sufficient for a detection search .the taylort1 , taylort3 and padet1 models give very similar results that are summarized in the fig .[ fig : taylort1 ] , [ fig : taylort3 ] and [ fig : padet1 ] .the safeness is greater than the minimal match for the bns and bhns searches , for all three waveforms .more precisely , for bns case , and it is slightly over - efficient for bhns case for total mass above , especially in the case of padet1 model . in the case , the bank is efficient between ~m_\odot ] .we used a model based on stationary phase approximation and showed that the template bank is efficient for most of the parameter space considered .the higher end of the mass range was slightly under efficient in the bbh case but this is partly related to the shortness of the signal and templates considered .the proposed template bank can be used for various template families , not only the stationary phase approximation family .in particular , we tested the taylort1 , taylort3 , padet1 , and eob models at 2pn order , that have been used for simulated injections in the various ligo science runs .it is interesting to see that the proposed template bank is efficient for most of the models considered in this paper .it is also worth noticing that in some cases the template bank is still over - efficient even though the bank size is already reduced by 40% ( e.g. , high mass bhns injections ) .the models that have been investigated in this paper are all based on 2pn order , therefore template families based on higher pn - order should be investigated . in the future , we also plan to consider the case of amplitude corrected waveforms .all simulations presented in this paper use the same model for both the template and signal generation. it would be interested to see how the template bank performs when templates are based on one model ( say , pad ) and the signals are from another ( say , eob ) .this hexagonal template bank is currently used within the ligo project to search for non - spinning inspiralling compact binaries in the fifth science run .this research was supported partly by particle physics and astronomy research council , uk , grant pp / b500731 .the author thanks stas babak for suggested the test of the bank with various template families , and b.s .sathyaprakash and gareth jones for useful comments , discussions , and corrections to this work .this paper has ligo document number ligo - p070073 - 00-z .99 a. abramovici __ , science * 256 * , 325 ( 1992 ) ; b. abbott , _ et al ._ , nuclear inst . and methods in physics research , a * 517/1 - 3 * 154 ( 2004 ) .et al . _ ,quantum grav .* 14 * , 1461 ( 1997 ) ; f. acernese __ , _ the virgo detector , _ prepared for 17th conference on high energy physics ( ifae 2005 ) ( in italian ) , catania , italy , 30 mar-2 apr 2005 , aip conf .proc . * 794 * , 307 - 310 ( 2005 ) . c. cutler , and k.s .thorne , an overview of gravitational wave sources , gr - qc/0204090 , ( 2001 ) .v. kalogera and others , apj , 601 , l179-l182 , ( 2004 ) , erratum - ibid . 614( 2004 ) l137 v. kalogera and c. kim and d. r lorimer and m. burgay and n. damico and a. possenti r. n. manchester and a. g. lyne and b. c. joshi and m. a. mclaughlin and m. kramer and j. m. sarkissian and f. camilo , apj 614 , l137-l138 , ( 2004 ) r. oshaughnessy , and c. kim , v. 
kalogera , and k. belczynski " , constraining population synthesis models via observations of compact - object binaries and supernovae , astro - ph/0610076 , 2006 . l. blanchet living rev .* 9 * ( 2006 ) 4 .l. blanchet , b.r .iyer , c.m . will , and a.g . wiseman class .* 13 * , 575584 ( 1996 ) f. beauville et al , class .* 22 * , 4285 , ( 2005 ) .b. owen , phys .d**53 * * , 6749 ( 1996 ) .b. owen , b. s. sathyaprakash 1998 phys .d * 60 * 022002 ( 1998 ) .s. babak and r. balasubramanian and d. churches and t. cokelaer and l. blanchet , t. damour and b.r .iyer , phys .d * 51 * , 5360 ( 1995 ) .lsc algorithm library lal , + http://www.lsc-group.phys.uwm.edu/daswg/projects/lal.html b. abbott _ et al ._ , ligo scientific collaboration , phys .d * 69*,122001 ( 2004 ) .b. abbott _ et al ._ , ligo scientific collaboration , phys . rev .d * 72*,082001 ( 2005 ) .b. abbott _ et al ._ , ligo scientific collaboration , phys . rev .d * 72*,082002 ( 2005 ) .b. abbott _ et al ._ , ligo scientific collaboration , phys . rev .d * 73*,062001 ( 2006 ) .b. abbott _ et al ._ , ligo scientific collaboration , gr - qc/0704.3368v2 , ( 2007 ) c. w. helmstrom , statistical theory of signal detection , 2nd edition , pergamon press , london , ( 1968 ) .t. damour , b. r. iyer and b. s. sathyaprakash phys .d * 63 * 044023 ( 2001 ) ; erratum - ibid , d * 72 * 029902 ( 2005 ) . c. van den broeck , a.s sengupta , class .grav . , * 24 * , 155 - 176 ( 2007 ) .sathyaprakash and s.v .dhurandhar , phys .d * 44 * , 3819 ( 1991 ) .dhurandhar and b.s .sathyaprakash , phys . rev .d * 49 * , 1707 ( 1994 ) .t. damour , b. r. iyer and b. s. sathyaprakash phys .d * 63 * 044023 ( 2001 ) ; erratum - ibid , d * 72 * 029902 ( 2005 ) .t. damour , b. r. iyer and b. s. sathyaprakash phys .d * 57 * 885 ( 1998 ) .a. buonanno and t. damour phys .d * 59 * 084006 ( 1999 ) .a. buonanno and t. damour phys .d * 62 * 064015 ( 2000 ) .t. damour , p. jaranowski and g. schfer phys .d * 62 * 084011 ( 2000 ) .t. damour and b.r .iyer and b.s .sathyaprakash , phys . rev .d * 63 * , 044023 ( 2001 ) .the simulations that we performed use different psd curves that are used to compute the inner products ( eq . [ eq : innerproduct ] ) .the different expressions provided uses the quantity , where is the frequency and is a constant .we summarize the different design sensitivity curves that have been used in our simulations together with the lower cut - off frequency : * the ego psd is given by where and hz .the other parameters are : + , , , , , , , , , , , , , and .+ the lower cut - off frequency is hz . * the geo psd is given by } { 1 + 0.5x^2 } \right\ } \end{split}\ ] ] where and hz .the lower cut - off frequency is hz . * the ligo - i psd is given by where and hz .the lower cut - off frequency is hz . * the advanced ligo psd is based on data provided in and given by where and hz .the lower cut - off frequency is hz . * finally , the virgo psd is based on data provided by j - y .vinet and is approximated by where with hz . the lower cut - off frequency is hz .here is a summary of the relationship between individual masses , and the two chirptime parameters and , that are given by where is the lower cut - off frequency of the template / signal , , and .the inversion is straightforward ; and are given by it is convenient to introduce the constants and given by so that eq .[ eq : t0t3 ] becomes finally , the chirp mass , , is given by that allow to be expressed as a function of chirp mass only : the parameter space is defined by three boundaries ( see fig . 
[fig : space ] ) . on each of these boundaries ,we want to express as a function of . using [ eq : tau03constants ], we can eliminate and express as a function of and : we can also eliminate , and express as a function of and : the lower boundary corresponds to , or . using eq .[ eq : t3t0eta ] , we can express as a function of only {\eta=1/4}.\ ] ] the second boundary is defined by and in ] .on those two boundaries , we can assume that is set to one of the extremity of the mass range , denoted .then , and .starting from we replace by its expression as a function of and , and obtain after some algebra a cubic equation of the form where , and , where is either set to or depending on which side of the parameter space we are .the solution for is standard and is given by we replace , in eq .[ eq : t3t0 m ] to obtain the value of on the boundaries when is provided .
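These boundary computations rest on the mass-to-chirp-time conversion defined at the beginning of this appendix. Since the explicit expressions are garbled in this copy, the sketch below uses the standard Newtonian ($\tau_0$) and 1.5PN ($\tau_3$) chirp-time definitions as an assumption, with component masses given in solar masses and $f_L$ the lower cut-off frequency; it performs the forward conversion and the inversion described above.

```python
import math

T_SUN = 4.925491e-6            # one solar mass expressed in seconds (G*Msun/c^3)

def masses_to_chirptimes(m1, m2, f_low):
    """(m1, m2) in solar masses -> Newtonian and 1.5PN chirp times in seconds."""
    M = (m1 + m2) * T_SUN
    eta = m1 * m2 / (m1 + m2) ** 2
    x = math.pi * M * f_low
    tau0 = 5.0 / (256.0 * math.pi * f_low * eta) * x ** (-5.0 / 3.0)
    tau3 = 1.0 / (8.0 * f_low * eta) * x ** (-2.0 / 3.0)
    return tau0, tau3

def chirptimes_to_masses(tau0, tau3, f_low):
    """Invert (tau0, tau3) -> (m1, m2); assumes a physical point (eta <= 1/4)."""
    M = 5.0 * tau3 / (32.0 * math.pi ** 2 * f_low * tau0)
    eta = 1.0 / (8.0 * f_low * tau3) * (32.0 * math.pi * tau0 / (5.0 * tau3)) ** (2.0 / 3.0)
    delta = math.sqrt(max(0.0, 1.0 - 4.0 * eta))
    m1 = 0.5 * M * (1.0 + delta) / T_SUN
    m2 = 0.5 * M * (1.0 - delta) / T_SUN
    return m1, m2

# a 1.4-1.4 solar-mass system from 40 Hz gives roughly 25 s and 0.9 s
tau0, tau3 = masses_to_chirptimes(1.4, 1.4, 40.0)
print(tau0, tau3, chirptimes_to_masses(tau0, tau3, 40.0))
```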
Matched filtering is used to search for gravitational waves emitted by inspiralling compact binaries in data from the ground-based interferometers. One of the key aspects of the detection process is the design of a _template bank_ that covers the astrophysically pertinent parameter space. In an earlier paper, we described a template bank based on a _square lattice_. Although robust, we showed that the square placement is over-efficient, with the implication that it is computationally more demanding than required. In this paper, we present a template bank based on a _hexagonal lattice_, whose size is reduced by 40% with respect to the proposed square placement. We describe the practical aspects of the hexagonal template bank implementation, its size, and its computational cost. We have also performed exhaustive simulations to characterize its _efficiency_ and _safeness_. We show that the bank is adequate to search for a wide variety of binary systems (primordial black holes, neutron stars and stellar-mass black holes) in data from both current detectors (initial LIGO, Virgo and GEO 600) as well as future detectors (advanced LIGO and EGO). Remarkably, although our template bank placement uses a metric arising from a particular template family, namely the stationary phase approximation, we show that it can be used successfully with other template families (e.g., Padé resummation and the effective one-body approximation). This quality of being effective for different template families makes the proposed bank suitable for a search that would use several of them in parallel (e.g., in a binary black hole search). The hexagonal template bank described in this paper is currently used to search for non-spinning inspiralling compact binaries in data from the Laser Interferometer Gravitational-wave Observatory (LIGO).
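The _efficiency_ quoted above, and throughout the simulations, is the match between an injected signal and the best template: the noise-weighted overlap maximized over the time of arrival and a constant phase. A minimal frequency-domain sketch is given below; it assumes one-sided frequency series sampled on a common grid and a PSD array such as those of appendix [annex:psds].

```python
import numpy as np

def match(h1, h2, psd, df):
    """Noise-weighted overlap between two frequency-domain waveforms,
    maximised over the relative arrival time (via the inverse FFT) and over
    a constant phase (via the modulus)."""
    def sigma(h):
        return np.sqrt(4.0 * df * np.sum(np.abs(h) ** 2 / psd))
    integrand = h1 * np.conj(h2) / psd
    overlap_t = 4.0 * df * len(integrand) * np.fft.ifft(integrand)
    return float(np.max(np.abs(overlap_t)) / (sigma(h1) * sigma(h2)))
```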
neural maps are a widely ranging class of neural vector quantizers which are commonly used e.g. in data visualization , feature extraction , principle component analysis , image processing , and classification tasks .a well studied approach is the neural gas network ( ng ) .an important advantage of the ng is the adaptation dynamics , which minimizes a potential , in contrast to the self - organizing map ( som ) frequently used in vector quantization problems . in the present paperwe consider a new control scheme for the _ magnification _ of the map bishop97a , claussen2002b , dersch95a , luttrell91a , ritter92a , villmann2000e .controlling the magnification factor is relevant for many applications in control theory or robotics , were ( neural ) vector quantizers are often used to determine the actual state of the system in a first step , which is an objective of the control task .for instance , in was demonstrated that the application of a magnification control scheme for the neural gas based classification system of position and movement state of a robot can reduce the crash probability .another area of application is information - theoretically optimal coding of high - dimensional data as occur in satellite remote sensing image analysis of hyperspectral images which is , in fact , the task of equiprobabilistic mapping .further applications can be found in medical visualization and classification tasks .generally , vector quantization according to an arbitrary -norm can be related to the problem of magnification control as it is explained below .the ng maps data vectors from a ( possibly high - dimensional ) data manifold **** onto a set of neurons , formally written as .each neuron is associated with a pointer **** also called weight vector , or codebook vector .all weight vectors establish the set .the mapping description is a winner take all rule , i.e. a stimulus vector is mapped onto the neuron the pointer of which is closest to the actually presented stimulus vector , neuron is called _winner neuron_. the set called ( masked ) receptive field of the neuron . during the adaptation process a sequence of data points is presented to the map with respect to the stimuli distribution .each time the currently most proximate neuron according to ( [ argmin ] ) is determined , and the pointer as well as all pointers of neurons in the neighborhood of are shifted towards , according to property of being in the neighborhood of is represented by a neighborhood function .the neighborhood function is defined as is defined as the number of pointers for which the relation is valid , i.e. is the winning rank . in particular , for the winning neuron we have .we remark that in contrast to the som the neighborhood function is evaluated in the input space .moreover , the adaptation rule for the weight vectors in average follows a potential dynamics .the _ magnification _ of the trained map reflects the relation between the data density and the density of the weight vectors .for the ng the relation with has been derived .the exponent is called _magnification factor_. for the ng it depends on the _ intrinsic _ dimensionality of the data which can be numerically determined by several methods bruske95a , camastra2003a , camastra2001a , grassberger83a , takens85a . for simplicitywe further require that the ( embedding ) data dimension is the intrinsic one . 
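For concreteness, a single adaptation step of the plain NG described above can be sketched as follows; the rank-based neighborhood $h_\lambda(k)=\exp(-k/\lambda)$ is the one used throughout, while the numerical values of $\epsilon$ and $\lambda$ are illustrative defaults only.

```python
import numpy as np

def ng_step(W, v, eps=0.05, lam=2.0):
    """One neural-gas adaptation step for codebook W (N x d) and stimulus v (d,).
    Every pointer is moved towards v, weighted by its winning rank."""
    dists = np.linalg.norm(W - v, axis=1)
    ranks = np.argsort(np.argsort(dists))      # k_i = number of closer pointers
    h = np.exp(-ranks / lam)                   # rank-based neighborhood function
    W += eps * h[:, None] * (v - W)
    return int(ranks.argmin())                 # index of the winner (rank 0)
```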
generally , the information transfer is not independent of the magnification of the map .it is known that for a vector quantizer ( or a neural map in our context ) with optimal information transfer the relation holds .otherwise , a vector quantizer which minimizes the mean distortion error the magnification factor * * ** , i.e. the magnification of a vector quantizer is directly related to the minimization of the description error according to a certain -norm .hence , the ng minimizes the usual distortion error .we now address the question how to extend the ng to achieve an _ a priori _ chosen optimization goal , i.e. an _ a priori _ chosen magnification factor .for the som several methods exist to control the magnification of the map . the first approach to influence the magnification of a learning vector quantizer , proposed in called the _ mechanism of conscience_. for this purpose a bias term is added in the winner rule ( [ argmin ] ) : is the actual winning probability of the neuron and is a balance factor .hence , the winner determination is influenced by this modification .the algorithm should converge such that the winning probabilities of all neurons are equalized .this is related to a maximization of the entropy and consequently the resulting magnification is equal to unity .however , as pointed out by , adding a conscience algorithm to the som does not equate to equiprobabilistic mapping , in general . only for _ very high dimensions _, a minimum distortion quantizer ( such as the conscience algorithm ) approaches an equiprobable quantizer ( - page 93 ) .further , an arbitrary magnification can not be achieved by this mechanism . moreover , numerical studies of the algorithm have shown instabilities . to control the magnification ,a local learning parameter was introduced into the usual som - learning scheme .the now localized learning allows in principle an arbitrary magnification .other authors proposed variants which lead more away from the original som by kernel methods or statistical approaches . for the ng a solution of the magnification control problem can be realized by introducing an adaptive _ local learning _step size according to the above mentioned approach for som .then , the new _ localized _ learning rule reads as the local learning parameters depending on the stimulus density at the position of the weight vectors via brackets denote the average in time , and is the best matching neuron with respect to ( [ argmin ] ) . note , that the local learning rate of the winning neuron is applied in the adaptation step ( [ trn_local_lernen ] ) for each neuron .this approach finally leads to the new magnification law is a modification of the old one .hence , the parameter plays the role of a control parameter .however , in real applications one has to estimate the generally unknown data distribution .usually this is done by estimation of the volume of the receptive fields and the firing rates .this may lead to numerical instabilities of the control mechanism villmann97n , van_hulle00a , villmann99j .therefore , an alternative control mechanism is demanded .recently , a new approach for magnification control of the som was introduced which avoids the -estimation problem .the respective approach is a generalization of a modification of the usual som .it is called ( generalized ) winner relaxing som ( wrsom ) . 
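For reference, the conscience winner rule mentioned above can be sketched as follows. Because the bias expression is garbled in this copy, the sketch uses DeSieno's standard formulation, in which the bias grows with the deviation of a neuron's running winning frequency from $1/N$; the constants are typical choices and are not taken from the text.

```python
import numpy as np

def conscience_winner(W, v, p, bias=10.0, rate=1e-4):
    """Winner selection with a conscience bias: frequently winning neurons are
    penalised so that winning probabilities equalise (entropy maximisation).
    W: codebook (N x d); v: stimulus (d,); p: running winning frequencies (N,)."""
    n = len(W)
    dists = np.linalg.norm(W - v, axis=1)
    s = int(np.argmin(dists - bias * (1.0 / n - p)))   # biased winner rule
    p += rate * ((np.arange(n) == s) - p)              # update winning frequencies
    return s
```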
in winner relaxing som an additional term occurs in weight vector update for the winning neuron , implementing a relaxing behavior .the relaxing force is a weighted sum of the difference between the weight vectors and the input according to their distance rank .the relaxing term was originally introduced in to obtain a learning dynamic for som according to an average reconstruction error including the effect of shifting voronoi borders .it was shown that the generalized winner relaxing mechanism applied in wrsom can be used for magnification control in som , too .thereby , the winner relaxing approach provides a magnification control scheme for som which is _ independent _ of the shape of the data distribution only depending on parameters of the winner relaxing term .we now transfer the generalized winner relaxing approach for som to the ng and consider its influence on the magnification . in complete analogy to the wrsom we add a general winner relaxing term to the usual ng - learning dynamic ( [ allg_lernen ] ) .then the weight update reads as the winner relaxing term is defined as on the additional weighting parameters and .we refer to this algorithm as the _ winner relaxing ng _the original winner relaxing term described in is obtained for the special parameter choice .note , that the relaxing term only contributes to the winner weight vector update as in the original approach .we now derive a relation between the densities and in analogy to for the winner relaxing learning ( winner_relaxing_learning ) .the procedure is very similar as in martinetz93d , villmann2000e .the average change for the winner relaxing ng learning rule ( [ winner_relaxing_learning ] ) is we now consider the equilibrium state , i.e. .for this purpose , we first separate the integral ( [ average_change ] ) into integral is the usual one according to the ng dynamics whereas , are related to the winner relaxing scheme . in the following we treat each integral in a separate manner .thereby we always assume a continuum approach , i.e. the index becomes continuous .hence , for a given input one can find an optimal fulfilling even .doing so , the -integral vanishes in the ( first order ) continuum limit because the integration over only contributes for , but in this case holds .we now pay attention to the -integral : the continuum assumption made above allows a turn over from sum to the integral form in ( [ i3_integral ] ) .the further treatment is in complete analogy to the derivation of the magnification in the usual ng .let be the difference vector winning rank only depends on , therefore we introduce the new variable can be assumed as monotonously increasing with .thus , the inverse exists and we can rewrite the -integral ( [ i3_integral ] ) into d\mathbf{% v}\]]with the matrix only contributes to for the winning weight ( realized by ) , i.e. , for which is equal to according to the continuum approach .hence , the integration over yields rapidly decreases to zero with increasing , we can replace the quantities , by the first terms of their respective taylor expansions around the point neglecting higher derivatives .we obtain corresponds to as the volume of a unit sphere .further , , hence, , the integral in equation ( [ i_3_equation ] ) can be rewritten as the integral terms in ( [ taylor_integral ] ) of odd order in vanish because of the rotational symmetry of . then( [ i_3_equation ] ) yields , neglecting terms in higher order in , with it remains to consider the -integral . 
as mentioned above , it is identical to the averaged adaptation of the usual ng .hence , the treatment can be taken from there and we get an equivalent equation .taking together ( [ i_1_solution ] ) and ( [ i_3_solution ] ) , the stationary solution of ( [ winner_relaxing_learning ] ) is given by differential equation roughly has the same form as the one for the usual neural gas ( [ i_1_solution ] ) .its solution is given by the exponent the magnification factor .hence , the magnification factor of the wrng can be described also in terms of the magnification of the usual neural gas , that the parameter of the winner relaxing term does not influence the magnification .two direct observations can be immediately made : firstly , the magnification exponent appears to be independent of the additional diagonal term ( controlled by ) for the winner which is in agreement with the wrsom result .therefore again is the usual setting in wrng for magnification control .secondly , by adjusting appropriately , the magnification exponent can be adjusted , e.g. to the most interesting case of maximum mutual information linsker87a , zador82a .maximum mutual information , which corresponds to optimal information transfer , is obtained when magnification equals the unit .hence , we have for this case the optimum value if the same stability borders of the wrsom also are valid here , one can expect to increase the ng exponent by positive values of , or to lower the ng exponent by a factor for .in contrast to the winner enhancing som , where the relaxing term has to be inverted ( ) to increase the magnification exponent , for the neural gas positive values of are required to increase the magnification exponent .however , the magnification factor still remains dependent on the generally unknown ( intrinsic ) dimension of the data .if this dimension is known , the parameter can be set _ a priori _ to obtain a neural gas of maximal mutual information . in this approachit is not necessary to keep track of the local reconstruction errors and firing rate for each neuron to adjust a local learning rate .possibilities for estimating the intrinsic dimension are the well - known grassberger - procaccia - analysis or the neural network approach using again a ng .however , one has to be cautious when transferring the result obtained above ( which would require to increase the number of neurons as well ) to a realistic situation where a decrease of with time will be limited to a final finite value to avoid the stability problems found in .if the neighborhood length in som is kept small but fixed for the limit of fine discretization , the neighborhood function of the second but one winner will again be of order 1 ( as for the winner ) .for the ng however the neighborhood is defined by the rank list . asthe winner is not present in the integral , all terms share the factor by which indicates that in the discretized algorithm has to be rescaled by to agree with the continuum theory .the maximum coefficient that contributes to the integral is given by the prefactor of the second but one winner , which is given by . ]a numerical study shows how the winner - relaxing mechanism is able to control the magnification for optimization of the mutual information of a map generated by the wrng . 
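A toy version of such a numerical study can be sketched as follows. The winner-relaxing update is written here with illustrative sign conventions: the usual rank-weighted NG term for all units, plus a relaxing force of weight $\mu$ and an optional diagonal term of weight $\kappa$ acting on the winner only (the exact expression is garbled in this copy, so this is an assumption for illustration). The entropy of the winning probabilities is then evaluated on a simple non-uniform one-dimensional density; all parameter values are illustrative.

```python
import numpy as np

def wrng_step(W, v, eps, lam=2.0, mu=0.5, kappa=0.0):
    """One winner-relaxing NG step (illustrative sign conventions)."""
    dists = np.linalg.norm(W - v, axis=1)
    ranks = np.argsort(np.argsort(dists))
    h = np.exp(-ranks / lam)
    s = int(np.argmin(dists))                           # winner
    delta = eps * h[:, None] * (v - W)                  # usual NG term
    relax = (h[:, None] * (v - W)).sum(axis=0) - h[s] * (v - W[s])
    delta[s] += eps * (kappa * (v - W[s]) - mu * relax)
    W += delta

def map_entropy(W, samples):
    """Entropy of the winning probabilities, estimated over test samples."""
    wins = np.zeros(len(W))
    for v in samples:
        wins[np.argmin(np.linalg.norm(W - v, axis=1))] += 1
    p = wins[wins > 0] / wins.sum()
    return -np.sum(p * np.log(p))                       # log(N) if equiprobable

rng = np.random.default_rng(0)
samples = np.sqrt(rng.random((20000, 1)))               # samples from P(x) = 2x on [0, 1]
W = rng.random((50, 1))
for t, v in enumerate(samples):
    wrng_step(W, v, eps=0.5 * (0.005 / 0.5) ** (t / len(samples)), mu=0.5)
print(map_entropy(W, samples), "vs maximal entropy", np.log(50))
```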
using a standard setup as in villmann97n of neurons and training steps with a probability density , with fixed and decaying from to , the entropy of the resulting map computed for an input dimension of , and plotted in fig .[ fig_entropy_results ] .thereby , the entropy is computed using the winning probabilty of the neurons: the entropy shows a dimension - dependent maximum approximately at .the scaling of the position of the entropy maximum with input dimension is in agreement with the continuum theory , as well as the prediction of the opposite sign of that has to be taken to increase mutual information .our numerical investigation indicates that the above discussed prefactor , in fact , has to be taken in account for finite and a finite number of neurons .we obtain , within a broad range around the optimal the entropy is close to the maximum given by information theory . in a second numerical studywe investigate the influence of the additional diagonal term ( controlled by ) for the winner .already for the wrsom the magnification exponent is independent of this diagonal term . in the respective derviation ( -integral ( [ i2_integral ] ) ) only first order approximations were used .otherwise , may contribute in higher orders . to verify that the contribution of an additionally added diagonal term is marginal, the entropy was calculated both for and .however , no influence on the entropy was found for the choice instead of .( fig [ fig_independence_result ] ) .more pronounced is the influence of the diagonal term on stability ; according to the larger prefactor no stable behavior has been found for , therefore is the recommended setting .we introduced a winner - relaxing term in neural gas algorithm to obtain a winner - relaxing neural gas with the possibility of magnification control .the winner relaxing scheme is adopted from winner - relaxing som .the new controlling scheme offers a method which is independent on the explicit knowledge of the generally unknown data distribution which is an advantage in comparison to the earlier presented neural gas with localized learning for magnification control .in particular , we avoid the difficult determination of the data probability density by estimation of the volume of the receptive fields of the neuron and the firing rate .numerical simulations show the abilities of the proposed algorithm . c. m. bishop , m. svensen , and c. k. i. williams .magnification factors for the som and gtm algorithms . in _ proceedings of wsom97 ,workshop on self - organizing maps , espoo , finland , june 4 - 6 _ , ( helsinki university of technology , neural networks research centre , espoo , finland , 1997 ) , 333338 .r. w. brause , an approximation network with maximal transinformation , in m. marinaro and p. g. morasso , editors , _ proc .icann94 , international conference on artificial neural networks _ ,volume i , ( london , uk , 1994 .springer),701704 .j. c. claussen and h. g. schuster , asymptotic level density of the elastic net self - organizing feature map , in j. dorronsoro , editor , _ proc . international conf . on artificial neural networks ( icann ) _( springer , berlin 2002 ) , lecture notes in computer science 2415 , 939944 . j. c. claussen and t. villmann , magnification control in neural gas by winner relaxing learning : independence of a diagonal term , in o. kaynak , editor , _ proc . international conference on artificial neural networks ( icann2003 ) _( istanbul , 2003 ) , 5861 . m. herrmann and t. 
villmann , vector quantization by optimal neural gas , in w. gerstener , a. germond , m. hasler , and j .- d .nicoud , editors , _ artificial neural networks proceedings of international conference on artificial neural networks ( icann97 ) lausanne _ , ( springer , berlin 1997 ) , lecture notes in computer science 1327 , 625630 .r. linsker , towards an organizing principle for a layered perceptual network , in d. z. anderson , editor , _ neural information processing systems _ , ( amer .phys . , new york , ny , 1987 ) , pages 485494 .f. takens . on the numerical determination of the dimension of an attractor , in b. braaksma , h. broer , and f. takens , editors , _ dynamical systems and bifurcations _ , ( springer , berlin , 1985 ) , lecture notes in mathematics no .1125 , 99106 .t. villmann and a. heinze , application of magnification control for the neural gas network in a sensorimotor architecture for robot navigation , in h .-gro , k. debes , and h .- j .bhme , editors , _ proceedings of selbstorganisation von adaptivem verfahren ( soave2000 ) ilmenau _ , fortschrittsberichte des vdi ( vdi - verlag dsseldorf , 2000 ) , 125134 .
An important goal in neural map learning, which can conveniently be accomplished by magnification control, is to achieve information-optimal coding in the sense of information theory. In the present contribution we consider the winner-relaxing approach for the neural gas network. Originally, winner-relaxing learning is a slight modification of the self-organizing map learning rule that allows for adjustment of the magnification behavior by an _a priori_ chosen control parameter. We transfer this approach to the neural gas algorithm. The magnification exponent can be calculated analytically for arbitrary dimension from a continuum theory, and the entropy of the resulting map is studied numerically, confirming the theoretical prediction. The influence of a diagonal term, which can be added without impacting the magnification, is studied numerically. This approach to maps of maximal mutual information is interesting for applications, as the winner-relaxing term only adds computational cost of the same order and is easy to implement. In particular, it is not necessary to estimate the generally unknown data probability density, as in other magnification control approaches. Keywords: neural gas, self-organizing maps, magnification control, vector quantization.
ip anycast allows a group of geographically distributed servers to share a common ip address .bgp proximity routes traffic to the topologically nearest server .historically , anycast usage has been restricted to stateless udp services such as dns root and top level domain servers , 6-to-4 relay routers , multicast rendezvous points , and sinkholes .recently , we are witnessing the use of anycast with stateful ( tcp ) internet services .in particular , anycast - enabled content delivery networks ( a - cdns ) with geographically large footprints are serving web content using anycast ip addresses for servers . while traditional cdns rely on dns- or http - based redirection mechanisms to direct client requests to the nearest cache , a - cdns rely on ip anycast to select the nearest cache and to perform load - balancing among the different caches. examples of anycast adoption includes both generic cdn providers like cloudflare or edgecast ( recently bought by verizon ) , and dedicated deployments such as the microsoft a - cdn , which serves _bing.com _ and _ live.com _ content .while some a - cdns openly disclose the location of their caches and their status , little is known about the volume of traffic they attract , the services they host , and the performance stability they guarantee . in this paper , we tackle the _ detection _ of a - cdns , and the _ characterisation _ of the traffic towards them .we use complementary methodologies , leveraging active measurements for detection , and passive measurements for characterisation .since our goal is to get a conservative yet representative view of anycast usage in the internet , we map the alexa top-100k most popular websites to ip/24 subnets , and identify 328 /24 a - cdn subnets that use ip anycast .we exploit the recently developed anycast detection technique to geolocate the servers whose addresses belong to anycast subnets . then , to provide a first characterisation of modern usage of a - cdns , we use traffic traces from 20,000 households collected from a large european isp for the entire month of september 2014 . in particular , we quantify the volume of traffic towards those a - cdns , and study the path stability between clients and a - cdn caches . we summarise our main findings as follows : 1 . a - cdns todayare a reality and host popular services . in our dataset, we observe 3% of web traffic towards a - cdns .in addition , approximately 50% of users encounter an a - cdn cache during normal browsing activity .2 . given the relatively small volume of traffic a - cdns have to handle , a - cdns have a small geographical footprint in comparison with traditional cdns .internet paths between a - cdns and clients are stable .the edgecast a - cdn did not witness any routing changes during the entire month , while traffic for other a - cdns revealed few routing events , separated by days of stable configurations .compared to the typical hourly changes observed in traditional cdns , the association between clients and anycast caches is relatively stable . in the remainder of this paper , we first discuss our contributions with respect to the literature on anycast and cdns ( sec .[ sec : related ] ) .then , we present the results of our active measurements and quantify anycast adoption by the cdns supporting the top-100k websites from alexa ( sec . [sec : active ] ) .in addition , we investigate the properties of a - cdns traffic ( sec . [sec : passive ] ) and the stability of routes towards a - cdn servers ( sec .[ sec : affinity ] ) . 
finally , we conclude with a discussion of open issues ( sec .[ sec : open ] ) .a large body of work in the literature investigates the impact of anycast usage on service performance by measuring server proximity , client - server affinity , server availability , and load - balancing .several studies propose architectural improvements to address the performance shortcomings of ip anycast in terms of scalability and server selection .more recently , there has been a renewed interest in ip anycast and particularly in techniques to detect anycast usage , and to enumerate and geolocate anycast replicas .while in the focus is only on dns servers , in this work , we apply the same anycast enumeration and geolocation technique to form an initial census of anycast ip addresses serving web traffic .closest to our work are the studies that investigate client - server affinity and quantify how often packets from a given client reach the same anycast server .previous efforts studied affinity either by periodically sending probes to anycast addresses and counting server switches , or by inspecting traffic at the anycast servers themselves and counting , for each client ip address , the number of times this ip shows up in multiple servers . with the exception of two studies , previous efforts showed that anycast witnesses rare server switching and maintains good connection affinity .consequently , stateful services could run on top of anycast .yet , most of the existing studies evaluate the performance of anycast with udp services such as dns root and ` .org ` top level domain .one exception is the work of levine et al . which reports positive results from operational experience of running tcp with anycast in cachefly .cachefly is the first cdn company to use tcp anycast . in this paper, we reappraise these results with other popular a - cdns we find in the wild . to the best of our knowledge ,no work in the literature has documented and studied the adoption of anycast by cdn providers .we are thus the first to provide a first look at a - cdn in the internet . ]in this section , we describe the workflow of the active measurement methodology used to detect a - cdns as schetched in fig .[ fig : workflow ] .first , we compile a list of the top-100k alexa websites . from each url, we extract the hostname , and resolve it to ip/32 addresses .we obtain a list of 97,530 unique ip/32 addresses that belong to 50,882 ip/24 subnets .we simultaneously ping all the ip/32 addresses from 250 planetlab nodes ( a single icmp sample per - vp , per - ip/32 for a total of 12.7 m pings ) .next , we ran the anycast detection technique developed in to identify ip/32 anycast addresses ( i.e. , located in more than one geographical location ) .due to lack of space , we defer the reader to for more details about the anycast detection technique . in a nutshell , over these collected measurements, we iteratively run a greedy solver for an optimisation problem to verify if the given ip/32 violates the speed of light constraint : when pinged from two different places , the sum of the rtt can not be smaller than the time light has to spend to go from one probe to the other , i.e. , the physics of the triangular inequality must hold . 
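The constraint just described can be made concrete with a short per-pair test: if the sum of the one-way reach radii of two vantage points (half of each RTT times the propagation speed) is smaller than the great-circle distance between them, no single physical location can explain both measurements, and the target must be anycast. The sketch below uses the vacuum speed of light, matching the strict constraint stated above (practical deployments often use the slower propagation speed in fibre); the coordinates and RTTs in the example are illustrative.

```python
import math

SPEED_KM_PER_MS = 299.792458           # vacuum speed of light; ~200 for fibre

def great_circle_km(a, b, R=6371.0):
    """Haversine distance between two (lat, lon) points given in degrees."""
    (la1, lo1), (la2, lo2) = a, b
    p1, p2 = math.radians(la1), math.radians(la2)
    dp, dl = math.radians(la2 - la1), math.radians(lo2 - lo1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(h))

def speed_of_light_violation(vp_a, vp_b, rtt_a_ms, rtt_b_ms):
    """True if the two RTT samples cannot originate from a single location."""
    reach_km = (rtt_a_ms + rtt_b_ms) / 2.0 * SPEED_KM_PER_MS
    return great_circle_km(vp_a, vp_b) > reach_km

# 5 ms RTTs from both Paris and New York to the same /32 imply anycast
print(speed_of_light_violation((48.85, 2.35), (40.71, -74.01), 5.0, 5.0))
```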
based then on triangulation , the ip/32 address then is geolocated .the whole process requires less than 3 minutes to complete .this maximises the probability of completing a census during a stationary period of time .we get a list of 708 ip/32 addresses ( 328 distinct ip/24 subnets ) that are anycast and geolocated within a 300 km radius area .these addresses belong to 64 ases .notably , three among the top-100 alexa worldwide ranking are present : _thepiratebay.se_ , _ reddit.com _ both hosted by cloudflare and _ wordpress.com _ hosted by automattic .the website in provides a web - interface that allows the research community to explore our results and in particular the geographical locations of replicas for the anycast ips identified in the top-100k alexa ranking .[ fig : demo ] shows a snapshot of the website .more precisely , it provides an aggregate view of the geographical footprints of discovered a - cdns .interestingly , this dataset is valuable since it reveals ip/24 anycast addresses belonging to more than 67 organisations , including edgecast , cloudflare , google and microsoft .we discuss some issues related to our measurement choices and results here .+ * consistency : * given that ip anycast is based on bgp , it is reasonable to assume that all ip/32 addresses belonging on an anycast ip/24 are also anycast .previous work shows that 88% of the anycast prefixes are /24 or bigger . to confirm this assumption , we run measurements for all ip/32 addresses of a subset of the anycast /24 subnets and obtain results in agreement with this assumption . + * representativeness .* by restraining ourselves to the top-100k list from alexa , we get a _ conservative _ estimate of anycast adoption . to be even more conservative , we filter out anycast ip/24 addresses that are located in only two locations . we prefer to avoid false positives that might arise from wrong geolocation of the planetlab nodes . however , our subset is representative since it is very likely that the most popular websites in the alexa ranking are also the ones attracting a significant number of users and adopt cdns to handle the traffic volume .+ * simplicity .* similarly , as a first step , we opt for simplicity and choose to consider only the ip/32 addresses for the landing pages in the alexa list ( e.g. , google.com/ , facebook.com/ , wikipedia.com/ ) . 
because websites are complex , the next logical step is to obtain a comprehensive list of the ip addresses of all the servers contacted when users connect to a website .however , this would increase the list of ip/32 addresses to check , inflating the running time of the census .[ cols=">,^,>,^,^,^,^,^,^,>,>,>,>,>,^,^,^ " , ] + ' '' '' ( * ) eu = europe , na = north america , sa = south america , as = asia , af = africa , oc = oceaniahaving got a list of 328 ip/24 anycast subnets , we now leverage passive measurements to characterise the traffic they generate .the process is sketched in the bottom part of fig .[ fig : workflow ] .we instrumented a passive probe at one pop of the operational network of an european country - wide isp .the probe runs tstat , a passive monitoring tool that observes packets flowing on the links connecting the pop to the isp backbone network .tstat rebuilds each tcp flow in real time , tracks it , and , when the connection is torn down , logs more than 100 statistics in a simple text file .for instance , tstat logs the client and server ip addresses , the application ( l7 ) protocol type , the amount of bytes and packets sent and received , etc .tstat implements dn - hunter , a plugin that annotates each tcp flow with the server fully qualified domain name ( fqdn ) the client resolved via previous dns queries .for instance , assume a client would like to access to _it first resolves the hostname into ip/32 address(es ) via dns , getting 123.1.2.3 .dn - hunter caches this information .then , when later at some time the same client opens a tcp connections to 123.1.2.3 , dn - hunter returns _ www.acme.com _ from its cache and associate it to the flow .this is particularly useful for unveiling _ services _ accessed from simple tcp logs . for this studywe leverage a dataset collected during the whole month of september 2014 .it consists 2.0 billions of tcp flows being monitored , for a total of 270 tb of network traffic .1.5 billion connections are due to web ( http or https ) generating 199 tb of data .we observe more than 20,000 customers ip addresses active over the month . among the many measurements provided by tstat, we consider for each tcp flow : ( i ) the minimum round - trip - time ( rtt ) between the tstat probe and the server ; ( ii ) the minimum time - to - live ( ttl ) of packets sent by the server ; ( iii ) the time - to - first - byte ( ttfb ) , i.e. , amount of time between the tcp syn message and the first segment carrying data from the server ; ( iv ) the amount of downloaded bytes carried ; ( v ) the application layer protocol ( e.g. , http , https , etc . ) ; and ( vi ) the fqdn of the server the client is contacting .these metrics are straightforward to monitor , and details can be found in . by restricting our analysis on the ip/24 anycast subnets resulting from our census ,we observe tcp traffic being served by some ip/32 addresses in those .overall , almost 44 million tcp connections are managed by anycast servers . those correspond to approximately 2% of all web connections and 3% of the aggregate http and https volume , for a total of 4.7 tb of data in the entire month .definitively a not - negligible amount of traffic .all traffic is directed to tcp port 80 or 443 , and labelled as http and ssl / tls by tstat dpi .only few exceptions are present , they represent in total about 0.26% of all anycast traffic . these exceptions are mainly related to some ad - hoc protocols for multimedia streaming , email protocols , or dns over tcp . 
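The per-subnet aggregation described above amounts to matching the server address of each flow against the list of anycast /24 prefixes and summing the per-flow statistics. The sketch below assumes a whitespace-separated tstat-style log with a header row and hypothetical field names (srv_ip, srv_bytes, fqdn); the actual tstat column layout differs, so these names are placeholders only.

```python
import csv
import ipaddress
from collections import defaultdict

def load_anycast_subnets(path):
    """One /24 prefix per line, e.g. '93.184.216.0/24'."""
    with open(path) as f:
        return {ipaddress.ip_network(line.strip()) for line in f if line.strip()}

def aggregate(flow_log, subnets):
    """Per-/24 volume, flow count and distinct FQDNs for anycast destinations.
    Field names are illustrative, not the actual tstat column names."""
    stats = defaultdict(lambda: {"bytes": 0, "flows": 0, "fqdns": set()})
    with open(flow_log) as f:
        for row in csv.DictReader(f, delimiter=" "):
            net = ipaddress.ip_network(row["srv_ip"] + "/24", strict=False)
            if net in subnets:
                s = stats[net]
                s["bytes"] += int(row["srv_bytes"])
                s["flows"] += 1
                s["fqdns"].add(row["fqdn"])
    return stats
```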
this testifies that today anycast is not anymore used for udp services only , and a - cdns are a reality . to corroborate this , fig .[ fig : any_client ] shows evolution during one - week of the percentage of active customers that have encountered an a - cdn server during their normal web browsing activities . besides exhibiting the classical day / night pattern, the figure shows that at peak time the probability to contact at least one a - cdn server is higher than 0.5 .table [ tab : summary ] presents a summary of the results .it considers the ip /24anycast networks that are contacted by more than 1000 customers . due to spaceconstrains we details only the top 13 most popular subnets with respect to the number of clients contacting them .the remaining are aggregated as `` others '' in the table . for each subnetwe list owner , i.e. , the organisations managing it as returned by whois , and the number of locations found through our active probing methodology , along their presence in different continents .interestingly , we observe well - known players like edgecast and cloudflare , but also almost unknown companies offering a - cdn services to customers .microsoft has its own a - cdn , which has the largest number of locations ( 53 ) , offering a worldwide service .notice how the number of locations is smaller than the one of traditional cdns .for instance , edgecast largest ip/24 anycast subnet appears to have 20 different locations .akamai cdn is instead known to have several thousands .while our census may have not located all possible caches locations , the two orders of magnitude of difference shows that a - cdns are in their early deployment , and we expect the number of locations to grow in the future .for comparison , the google public dns servers network for the 8.8.8.8 network has 55 worldwide resolvers . an other interesting point to studyis whether on owner offer dns load balancing service or not . to study this aspect ,we can not rely on passive measurements only since we have to understand if an fqdn has two or more distinct ip/32 addresses at the same time . by using _host _ we discovered the number of distinct ip/32 addresses offered by each owner to each fqdn . aggregating one - week long tstat logs , we detail information about volume and service offered . for each ip/24 subnet , table [ tab : summary ] reports the distinct a - cdn ip/32 addresses that have been contacted at least once , the total volume of bytes served , the number of flows , of users , and of distinct fqdns .interestingly , we observe a very heterogeneous scenario : the top four ip/24 networks are owned by edgecast , each serving more than 500 gb / month of traffic to more than 10,000 households . thousands of services ( fqdns ) are involved .those include very popular services , like wordpress , twitter , gravatar , tumblr , tripadvisor , spotify , etc .each fqdn is uniquely resolved to the same ip/32 address ( but the same ip/32 address serves multiple fqdns ) .interestingly , this behaviour is shared among most of the studied a - cdns , meaning that they do not rely on dns for load - balancing .an exception is given by cloudflare s networks which offer dns load - balancing .indeed we saw that cloudflare offer up to 8 ip/32 addresses in the same ip/24 for the same fqdn . by look at the left graph of figure [ fig : dns_load_balancing ] we can see the cumulative distribution function ( cdf ) of the number of distinct ips/32 addresses eployed for each fqdn divided by owner . 
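The per-FQDN counts behind this CDF were obtained with the host utility; an equivalent check can be sketched in a few lines by resolving each name several times and counting the distinct A records returned (resolver caching can hide part of the rotation, so this is only a rough indicator).

```python
import socket
from collections import defaultdict

def distinct_a_records(fqdns, repeats=5):
    """Count distinct IPv4 addresses returned for each FQDN over several
    resolutions, as a rough indicator of DNS-level load balancing."""
    seen = defaultdict(set)
    for name in fqdns:
        for _ in range(repeats):
            try:
                for info in socket.getaddrinfo(name, 80, family=socket.AF_INET):
                    seen[name].add(info[4][0])
            except socket.gaierror:
                pass
    return {name: len(addrs) for name, addrs in seen.items()}

# e.g. distinct_a_records(["www.example.com"])
```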
in this first graphwe evaluated the number of ip/32 addressees without take into account to which subnet/24 they belong .e.g. , some fqdns of cloudflare-1 have also ip/32 addressees belonging to cloudflare-2 .as we can see cloudflare ( 1 or 2 ) use more than 1 ip/32 addresses for all their fqdns .the same situation is present by considering only the ip/32 belonging to the same subnet of the owner , as depicated in the left part of the figure . as we can see herethe maximum number of distinct ip/32 addresess drop from 15 to 8 for cloudflare-1 while ip/32 hadled with a single ip/32 address became 54.4% for cloudflare-2 .microsoft directly manages its own a - cdn .we discovered 53 locations , where only 9 ip addresses are present .those serve bing , live , msn , and other microsoft.com services .since it handles quite a small amount of data and flows , we checked if there are other ip/32 servers handling those popular microsoft service in the logs .we found indeed that all of _ bing.com _ pages and web searches are served by the microsoft a - cdn , while static content such as pictures , map tiles , etc .are actually retrieved by akamai cdn .this suggests that microsoft is using both a traditional cdn and its own a - cdn at the same time .next is comodo .it focuses its business on serving certificate validations via ocsp services : lot of customers uses it to fetch little information .only 3 ip/32 addresses have been active from our passive vantage point .note that servers have been located only in europe and north america .highwinds a - cdns instead supports video services for advertisement companies , and images for popular adult content websites .notice the relative longer lived content ( more data , fewer flows ) .cloudflare a - cdn serves both popular website like reddit , and less known services , like specialised forums .a detailed list of the top-10 services for each of the 13 top networks in table [ tab : summary ] is available in appendix [ sec : services ] .other a - cdn providers are present , which serve several tens of thousands of services . in total, 13,014 ip addresses have been found active during the whole month .[ fig : server_growth ] details the discovery process by reporting the number of unique ip/32 addresses discovered over time in the entire month of september 2014 .as shown , the discovery quickly grows during the first days , then the process slows down .surprisingly , after 30 days , the growth is still far from being complete .+ + one of the popular belief about internet routing is that the paths may change quite frequently due to faults , misconfigurations , peering changes , and , for a - cdns , voluntary load - balancing optimisation .thus , anycast services are mostly suitable for datagram services based on udp , while stateful and connection - oriented services using tcp may suffer troubles due to sudden nearest server changes that may cause abrupt interruption of ongoing tcp connections , and loss of consistency on state . in this sectionwe thus look for evidences that hint for possible routing changes . 
in particular , we look at changes in ip ttl , tcp rtt and time to first byte that may suggest of possible path changes for a given ip/24 subnet .we consider the whole month of september 2014 dataset .the last column of table [ tab : summary ] reports our findings : almost surprisingly , we observe that for the majority of cases we observe no notable change during the entire month .not reported here due to lack of space , this is testified by a practically constant rtt , identical pattern for ttl and ttfb through the entire month .there are four events that we believe it is worth reporting to illustrate some of the changes that we observe .[ fig : events ] reports the detail of the evolution of the rtt , ttl and ttfb for two events .each dot is a measure for a single tcp flow .top plot refers to microsoft a - cdn from the 5 to the 11 of september 2014 .focus on the rtt first .it exhibits a sudden change at midnight of the 5 , when the rtt jumps from 8 ms to 28 ms .it then goes back three days later , after a transient phase during which the rtt gets higher than 150 ms .similar changes are observed in the ttl , where two patterns are clearly visible .we argue the multiple values of the ttl are due to different servers being contacted _ inside _ the internal ip/24 network of the datacenter ( and thus reached through different number of internal routers ) .the ttfb clearly shows the impact on performance when a further location is contacted .the variability in the ttfb depends on multiple factors , including browser pre - opening tcp connections , and server processing time .however , the minimum ttfb is clearly constrained by ( twice ) the rtt .bottom plots in fig .[ fig : events ] show a similar event for netdna a - cdn : during september 2 , the rtt first jumps to 110 ms , then to 23 ms .this corresponds to a change in the ttl and in the ttfb patterns as well .interestingly , the ttl patterns suggests the presence of servers that use different initial values of the ttl : a group of servers chooses 128 ( in green ) , while another group uses 64 ( in red ) .when on the late evening of the 3 of september the routing changes , traffic is routed to a likely different cache , where all servers pick 64 as initial ttl value , i.e. , we observe the green dots to suddenly disappear .changes for the other two events are similar and not reported here for the sake of brevity . in summary , while we observe changes in the anycast path to reach the a - cdn caches , those events are few , and each different routing configuration last for days .this is different from the patterns shows by traditional cdns , where load balancing changes are more frequent .this can be related to the also moderately smaller number of locations , and to the different load balancing policies a - cdn providers are adopting . clearly , a longer study is needed to better quantify the routing changes over time .we presented in this paper a first characterisation of anycast - enabled cdn .starting from a census of anycast subnets , we analysed passive measurements collected from an actual network to observe the usage and the stability of the service offered by a - cdns .our finding unveil that a - cdns are a reality , with several players adopting anycast for load balancing , and with users that access service they offer on a daily basis .interestingly , passive measurements reveal a very stable service , with stable paths and cache affinity properties . 
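The routing events above were identified from the joint evolution of the per-flow minimum RTT and TTL. A simple automated flag for such events can be sketched as follows, by comparing the median RTT and the dominant TTL of consecutive time bins; the bin length and thresholds are illustrative choices.

```python
import numpy as np

def flag_path_changes(times, rtts, ttls, bin_s=3600, rtt_jump_ms=5.0):
    """Flag time bins whose median RTT shifts by more than rtt_jump_ms, or
    whose dominant TTL value changes, compared with the previous bin."""
    order = np.argsort(times)
    times = np.asarray(times)[order]
    rtts = np.asarray(rtts)[order]
    ttls = np.asarray(ttls)[order]
    bins = ((times - times[0]) // bin_s).astype(int)
    events, prev_rtt, prev_ttl = [], None, None
    for b in np.unique(bins):
        sel = bins == b
        med_rtt = np.median(rtts[sel])
        mode_ttl = int(np.bincount(ttls[sel].astype(int)).argmax())
        if prev_rtt is not None and (abs(med_rtt - prev_rtt) > rtt_jump_ms
                                     or mode_ttl != prev_ttl):
            events.append((int(b), prev_rtt, med_rtt, prev_ttl, mode_ttl))
        prev_rtt, prev_ttl = med_rtt, mode_ttl
    return events
```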
in summary ,anycast is increasingly used , a - cdns are prosperous and technically viable .this work is far from yielding a complete picture , and it rather raises a number of interesting questions that we list in the following to stimulate discussion in the community : + * completeness*. we have so far focused on a subset of the anycast ipv4 space .it follows that results conservatively estimate anycast usage , but this also means that more effort is needed to build ( and especially maintain ) an internet - wide anycast census .similarly , dataset spanning over larger period of time and related to more vantage points can enable more general results with respect to the actual characteristics of a - cdns .+ * horizontal comparison with ip unicast*. albeit very challenging , efforts should be dedicated to compare unicast vs anycast cdns for modern web services . to the very least , a statistical characterisation of the pervasiveness of the deployments ( e.g. , in term of rtt ) and its impact on objective measures ( e.g. , time to the first byte , average throughput , etc . )could be attempted .however , many more vantage points than the single one considered in this work would be needed to gather statistically relevant samples from the user population viewpoint . + *vertical investigation of cdn strategies*. from our initial investigation , we noticed radically different strategies , with e.g. , hybrid dns resolution of few anycast ip addresses , use of many dns names mapping to few anycast ips , use of few names mapping to more than one anycast ips , etc . gatheringa more thorough understanding of load balancing in these new settings is a stimulant intellectual exercise which is not uncommon in our community . + *further active / passive measurement integration*. as anycast replicas are subject to bgp convergence , a long - standing myth is that it would forbid use of anycast for connection - oriented services relying on tcp .given our results , this myth seems no longer holding . yet , while we did not notice in our time frame significant changes in terms of ip - level path length , more valuable information would be needed from heterogeneous sources , and by combining active and passive measurements .this work has been partly carried out at lincs .the research leading to these results has received funding from the european union under the fp7 grant agreements no.318627 ( integrated project `` mplane '' ) .http://www.enst.fr/~drossi/anycast .project overview . . .http://automattic.com . .https://www.cloudflare.com/network-map . .https://www.cloudflarestatus.com/. .http://www.edgecast.com / network / map/. .http://status.edgecast.com/. .https://developers.google.com/speed/public-dns/faq#locations .joe abley . .in _ proc . of usenix annual technical conference _ , 2004 . ., 25:814 , 2011 .root server technical operations . .hitesh ballani and paul francis . .in _ proc .acm sigcomm _ , 2005 .hitesh ballani , paul francis , and sylvia ratnasamy . a measurement - based deployment proposal for ip anycast . in _ proc .acm imc _ , 2006 .biet barber , matt larson , and mark kosters .traffic source analysis of the j root anycast instances .nanog , 2006 .biet barber , matt larson , mark kosters , and pete toscano . .nanog , 2004 .peter boothe and randy bush . .caida , 2005 .achefly . .danilo cicalese , danilo giordano , alessandro finamore , marco mellia , maurizio munaf , dario rossi , and diana joumblatt . . 
,may 2015 .danilo cicalese , diana joumblatt , dario rossi , marc - olivier buob , jordan aug , and timur friedman . .in _ proc .ieee infocom _ , 2015 .lorenzo colitti . .ripe , 2006 .lorenzo colitti .measuring anycast server performance : the case of k - root .nanog , 2006 . xun fan , john s. heidemann , and ramesh govindan . in _ proc .ieee infocom _ , 2013 .alessandro finamore , ignacio bermudez , and marco mellia . . , 2013michael j. freedman , karthik lakshminarayanan , and david mazires . .in _ proc .usenix nsdi _ , 2006 .howpublished = ietf rfc 3258 title = distributing authoritative name servers via shared unicast addresses year = 2002 hardie , ted . . . ,daniel karrenberg . .nanog , 2005 .dina katabi and john wroclawski . .in _ proc .acm sigcomm _ , 2000 .matt levine , barrett lyon , and todd underwood . .nanog , 2006 .ziqian liu , bradley huffaker , marina fomenkov , nevil brownlee , and kimberly c. claffy . in _ proc . of pam _ , 2007 .doug madory , chris cook , and kevin miao . .nanog , 2013 . kevin miller . .nanog , 2003 .erik nygren , ramesh k. sitaraman , and jennifer sun .the akamai network : a platform for high - performance internet applications ., 44(3):219 , aug 2010 .sandeep sarat , vasileios pappas , and andreas terzis . .in _ proc . icccn _ , 2006 .ruben torres , alessandro finamore , jin ryong kim , marco mellia , maurizio munaf , and sanjay rao .dissecting video server selection strategies in the youtube cdn . in _distributed computing systems ( icdcs ) , 2011 31st international conference on _ , pages 248257 , june 2011 .in this section we report the top 10 services of each of the top 13 networks reported in table [ tab : summary ] .each service is defined by its _second level domain _e.g. , either www.bing.com or bing.it will be bing only . for each services than we report the following data related to the traffic generated during the month of september 2014 : the number of distinct servers that served during the month it , the volume in mb , the number of flows , the number of distinct users who requested the service and finally the distinct number of fqdns e.g. , www.bing.com or bing.it count as two .it is important to remark that the number of distinct servers can be different with respect to the dns load balancing policy explained in section x. 
here number of ips can be greater since it is evaluated in the whole month instead of a precise moment .therefore , a service might be moved from one server to an other not for load - balancing reason but due to maintenance .service & servers & vol.[mb ] & flows & users & fqdn + digicert & 1 & 1662.75 & 1079308 & 10661 & 5 + wp & 1 & 36905.8 & 834071 & 9076 & 9 + bkrtx & 1 & 1011.81 & 153314 & 8883 & 2 + optimizely & 1 & 6077.63 & 264662 & 8688 & 1 + crwdcntrl & 1 & 1038.8 & 150670 & 8682 & 1 + omniroot & 1 & 16420 & 66105 & 8420 & 2 + w55c & 2 & 361.442 & 100788 & 7804 & 2 + typekit & 1 & 5559.58 & 165720 & 7675 & 3 + edgecastcdn & 55 & 21959.4 & 281715 & 5923 & 291 + sascdn & 1 & 1337.82 & 56819 & 5794 & 2 + service & servers & vol.[mb ] & flows & users & fqdn + twitter & 1 & 45665.3 & 4042986 & 11254 & 4 + gravatar & 1 & 5218.38 & 1387880 & 9961 & 5 + twimg & 4 & 113178 & 1142388 & 9442 & 14 + adrcdn & 1 & 1234.27 & 95104 & 7796 & 1 + tiqcdn & 1 & 782.397 & 120563 & 6897 & 1 + edgecastcdn & 33 & 5867.74 & 162551 & 4871 & 63 + doublepimp & 1 & 1231.38 & 106590 & 4732 & 6 + tumblr & 5 & 305513 & 898342 & 4612 & 34 + exoclick & 1 & 670.502 & 214197 & 4442 & 2 + bstatic & 2 & 20377.3 & 520660 & 4177 & 2 + service & servers & vol.[mb ] & flows & users & fqdn + microsoft & 1 & 24739.8 & 1731861 & 9708 & 11 + msecnd & 3 & 120398 & 776628 & 8890 & 216 + adrcdn & 1 & 81589 & 179684 & 8286 & 1 + aspnetcdn & 1 & 5865 & 117402 & 7649 & 1 + tripadvisor & 1 & 12020.5 & 221128 & 4738 & 1 + w55c & 1 & 761.561 & 33623 & 4557 & 1 + msn & 1 & 1467.56 & 134112 & 4509 & 10 + mozilla & 1 & 31728.3 & 105097 & 3513 & 12 + phncdn & 1 & 456726 & 113367 & 2630 & 2 + edgecastdns & 2 & 239.702 & 7705 & 1753 & 2 + service & servers & vol.[mb ] & flows & users & fqdn + weborama & 1 & 5265.13 & 287956 & 9232 & 4 + jwpcdn & 2 & 5100.11 & 409004 & 8979 & 3 + longtailvideo & 1 & 1126.79 & 109453 & 6221 & 11 + ad4mat & 1 & 322.927 & 67397 & 4824 & 5 + webads & 1 & 422.79 & 35335 & 4115 & 2 + mozilla & 1 & 31575.6 & 96193 & 3376 & 1 + deviantart & 2 & 17367.9 & 55189 & 2782 & 11 + edgecastcdn & 20 & 47051.5 & 49753 & 2392 & 170 + ppstatic & 1 & 2790.91 & 98391 & 2229 & 5 + everyplay & 1 & 11286.9 & 18813 & 1861 & 1 + service & servers & vol.[mb ] & flows & users & fqdn + bing & 2 & 10067.4 & 840891 & 10501 & 41 + microsoft & 1 & 1315.81 & 173235 & 4540 & 1 + live & 6 & 1758.57 & 44898 & 907 & 37 + windowssearch & 2 & 221.675 & 13149 & 893 & 2 + a - msedge & 6 & 157.061 & 8899 & 264 & 34 + msn & 3 & 306.819 & 8572 & 250 & 9 + akadns & 4 & 18.0363 & 611 & 72 & 17 + myhomemsn & 1 & 40.4178 & 1919 & 56 & 1 + msnrewards & 1 & 3.77615 & 207 & 38 & 1 + livefilestore & 1 & 0.0163116 & 2 & 1 & 1 + service & servers & vol.[mb ] & flows & users & fqdn + comodoca & 2 & 1678.81 & 389377 & 9390 & 3 + usertrust & 2 & 246.683 & 113390 & 7859 & 3 + netsolssl & 2 & 24.9833 & 10629 & 2456 & 2 + gandi & 2 & 26.6111 & 12705 & 2395 & 3 + trust - provider & 2 &16.4111 & 17395 & 1948 & 2 + terena & 2 & 30.2878 & 7075 & 939 & 3 + csctrustedsecure & 2 & 4.34491 & 5984 & 677 & 3 + globessl & 2 & 1.79976 & 3017 & 197 & 3 + incommon & 2 & 0.917895 & 336 & 134 & 3 + ssl & 2 & 0.509459 & 266 & 132 & 4 + service & servers & vol.[mb ] & flows & users & fqdn + addtoany & 1 & 541.206 & 57956 & 5404 & 1 + jquerytools & 1 & 315.579 & 16239 & 3761 & 1 + netdna - cdn & 43 & 4121.45 & 80414 & 3385 & 530 + buysellads & 1 & 283.627 & 21806 & 2088 & 2 + popcash & 1 & 8.78808 & 10245 & 1775 & 2 + netdna - ssl & 16 & 1418.63 & 30571 & 1599 & 84 + flowplayer & 2 & 
52.0184 & 5622 & 1238 & 2 + fastcdn & 1 & 20.8619 & 5571 & 681 & 1 + feedbackify & 1 & 10.9618 & 3347 & 568 & 1 + chitika & 2 & 5.09706 & 1758 & 546 & 2 + service & servers & vol.[mb ] & flows & users & fqdn + hwcdn & 2 & 106703 & 84692 & 3282 & 72 + adxpansion & 2 & 9567.55 & 165124 & 3041 & 2 + xvideos & 2 & 248916 & 155720 & 2603 & 29 + sexad & 2 & 3217.49 & 36290 & 1918 & 1 + reporo & 2 & 2285.14 & 19235 & 1679 & 2 + adjuggler & 2 & 120.766 & 36262 & 1396 & 4 + camads & 2 & 2733.45 & 23726 & 1183 & 1 + sancdn & 2 & 2146.91 & 10736 & 1156 & 1 + crossrider & 2 & 337.945 & 22467 & 1107 & 4 + nsimg & 2 & 31150.4 & 17954 & 1089 & 2 + service & servers & vol.[mb ] & flows & users & fqdn + netdna - cdn & 28 & 10276.5 & 54751 & 1600 & 355 + datafastguru & 5 & 282.8 & 283056 & 1486 & 4 + cedexis & 2 & 145.48 & 6378 & 1462 & 2 + mdotm & 2 & 526.938 & 3829 & 1128 & 2 + pusher & 1 & 31.99 & 3486 & 674 & 1 + petametrics & 1 & 130.947 & 8736 & 597 & 1 + ad - score & 1 & 250.462 & 70998 & 597 & 1 + engageya & 2 & 707.5 & 94965 & 519 & 4 + revcontent & 1 & 351.279 & 4839 & 503 & 1 + rnbjunk & 1 & 270.919 & 14527 & 480 & 5 + service & servers & vol.[mb ] & flows & users & fqdn + reddit & 8 & 587.413 & 48806 & 3308 & 46 + redditstatic & 5 & 158.645 & 17850 & 1237 & 1 + redditmedia & 10 & 571.371 & 36644 & 629 & 7 + cursecdn & 3 & 5934.29 & 42142 & 449 & 41 + camplace & 3 & 328.093 & 9234 & 444 & 12 + pluginnetwork & 3 & 756.187 & 12587 & 306 & 6 + gfycat & 3 & 6556.39 & 2949 & 275 & 9 + comodo & 3 & 25323.2 & 11266 & 212 & 1 + smugmug & 3 & 487.699 & 1776 & 187 & 107 + diablofans & 3 & 247.697 & 2507 & 135 & 4 + service & servers & vol.[mb ] & flows & users & fqdn + xhcdn & 2 & 159.63 & 531141 & 2911 & 10 + vstreamcdn & 2 & 9.57007 & 29015 & 1128 & 2 + ahcdn & 4 & 4.07057 & 13341 & 770 & 2 + mystreamservice & 2 & 5.2034 & 17473 & 467 & 3 + wildcdn & 1 & 4.47277 & 13075 & 450 & 1 + alotporn & 2 & 1.4562 & 5054 & 430 & 1 + vipstreamservice & 1 & 0.898319 & 2393 & 173 & 1 + tryboobs & 2 & 0.394236 & 1516 & 165 & 1 + inxy & 2 & 0.417435 & 1205 & 105 & 1 + ohsesso & 1 & 1.03889 & 2425 & 100 & 1 + service & servers & vol.[mb ] & flows & users & fqdn + hwcdn & 7 & 11952.7 & 58551 & 1126 & 50 + thestaticvube & 1 & 11281 & 15686 & 745 & 4 + vidible & 1 & 700.347 & 2883 & 548 & 1 + trustedshops & 1 & 11.8071 & 1761 & 197 & 1 + xplosion & 1 & 13.9726 & 4113 & 90 & 3 + chzbgr & 1 & 192.167 & 707 & 53 & 3 + bose & 1 & 127.601 & 475 & 43 & 2 + brainient & 1 & 186.67 & 298 & 32 & 4 + metartnetwork & 1 & 2.53812 & 1470 & 29 & 1 + blaze & 1 & 18.7321 & 262 & 25 & 1 + service & servers & vol.[mb ] & flows & users & fqdn + filmstream & 1 & 1452.82 & 9080 & 490 & 2 + sendapplicationget & 1 & 27.8062 & 6305 & 275 & 7 + mangaeden & 1 & 13713.2 & 11517 & 157 & 5 + feedly & 3 & 221.195 & 14278 & 114 & 4 + mobisystems & 1 & 17.3486 & 215 & 55 & 3 + keep2share & 1 & 5.18796 & 368 & 36 & 4 + mafa & 1 & 121.536 & 436 & 34 & 3 + racing - games & 1 & 32.6952 & 192 & 23 & 2 + switchfly & 2 & 31.5114 & 229 & 22 & 1 + ofreegames & 1 & 48.0902 & 171 & 21 & 2 +
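as a complement to the tables above, the following is a minimal sketch of the per-service aggregation they are built from: fqdns are grouped by their second level domain (www.bing.com and bing.it both map to bing, while counting as two distinct fqdns). the record layout and field names are hypothetical, and the naive domain split should be replaced by a public-suffix-aware parser in any real use.

```python
# illustrative sketch of the per-service aggregation behind the tables above:
# group per-flow records by the second-level domain of the contacted fqdn.
# field names (fqdn, server_ip, bytes, client_id) are hypothetical; a real
# implementation should use a public-suffix list (e.g. via tldextract) rather
# than the naive split used here, which mishandles suffixes such as .co.uk.
import pandas as pd

def second_level_domain(fqdn: str) -> str:
    parts = fqdn.rstrip(".").split(".")
    return parts[-2] if len(parts) >= 2 else fqdn   # www.bing.com -> bing

def per_service_summary(flows: pd.DataFrame) -> pd.DataFrame:
    flows = flows.copy()
    flows["service"] = flows["fqdn"].map(second_level_domain)
    summary = flows.groupby("service").agg(
        servers=("server_ip", "nunique"),
        vol_mb=("bytes", lambda s: s.sum() / 1e6),
        flows=("fqdn", "size"),
        users=("client_id", "nunique"),
        fqdns=("fqdn", "nunique"),
    )
    return summary.sort_values("users", ascending=False)
```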
ip anycast routes packets to the topologically nearest server according to bgp proximity. udp-based services (e.g., dns resolvers and multicast rendez-vous points), which rely on a single request-response exchange, have historically been the first to use ip anycast. while there is a common belief in the internet measurement community that stateful services cannot run on top of anycast due to internet path instabilities, in this work we shed some light on the usage of anycast by anycast-enabled content delivery networks (a-cdns). indeed, to the best of our knowledge, little is known in the literature about the nature and the dynamics of these new players. in this paper, we provide a first look at the traffic of a-cdns. our methodology combines active and passive measurements. building upon our previous work, we use active measurements to detect anycast usage for servers hosting popular websites, and to geolocate the a-cdn caches. next, we characterise the traffic towards a-cdns in the wild using a month-long dataset collected passively from a european isp. we find that i) a-cdns such as cloudflare and edgecast serve popular web content over tcp, ii) a-cdn servers are contacted by users on a daily basis, and iii) routes to a-cdn servers are stable, with few changes observed.
adhesive tapes are routinely used in a variety of situations including daily usage as stickers , in packing and sealing . yet, day - to - day experiences like intermittent peeling of an adhesive tape and the origin of the accompanying audible noise have remained ill understood .this may be partly attributed to the fact that adhesion is a highly interdisciplinary subject involving diverse but interrelated physical phenomena such as intermolecular forces of attraction at the interface , mechanics of contact , debonding and rupture , visco - plastic deformation and fracture , and frictional dissipation which operates during peeling . yetanother reason is that most information on adhesion is obtained from quasistatic or near stationary conditions . apart from scientific interest , understanding the intermittent peel or the stick - slip process has relevance to industrial applications as well .for example , optimizing production schedules that involve pasting or peeling of an adhesive tapes at a rapid pace in an assembly line requires a good understanding of stick - slip dynamics . moreover , insight into the time dependent and dynamical aspects of adhesionis expected to be important in design of adhesives with versatile properties required in variety of applications , in understanding the mechanisms leading to the failure of adhesive joints as also in understanding biologically relevant systems such as the gecko or reorientation dynamics of cells .adhesion tests are essentially fracture tests designed to study adherence of solids and generally involve normal pulling off and peeling .such experiments can be performed under quasistatic or near - stationary and nonequilibrium conditions as well .the latter kind of experiments demonstrate the rate dependence of adhesive properties .it is this rate dependence and the inherent nonlinearity that leads to a variety of instabilities .these kinds of peeling experiments are comparatively easy to setup in a laboratory .moreover , the set up also allows one to record unusually long force waveforms and ae signals that should be helpful in extracting useful information on the nonlinear features of the system .one type of peeling experiment that yields dynamical information is carried out with an adhesive tape mounted on a roller subjected to a constant pull velocity .peeling experiments have also been performed under constant load conditions . at low pull velocities ,the velocity of the contact point keeps pace with the imposed velocity .the same is true at high velocities as well .however , there is an intermediate regime of traction velocities where the peeling is intermittent . 
peeling in this regimeis accompanied by a characteristic audible noise .it must be stressed that these two stable dissipative branches refer to stationary branches .even so , the stick - slip dynamics observed in the intermediate region of pull velocities has been attempted by assuming an unstable branch connecting the two stable branches .the strain energy release rate shows a power law for low velocities with an exponent around 0.3 .the high velocity branch also shows a power law but with a much higher exponent value of about 5.5 .the low velocity branch is known to arise from viscous dissipation and that at high velocity corresponds to fracture .these studies report a range of wave forms starting from saw tooth , sinusoidal or even irregular wave form that has been termed chaotic .more recently , the dynamics of the peel point has been imaged as well .stick - slip processes are usually observed in systems subjected to a constant response where - in the force developed in the system is measured by dynamically coupling the system to a measuring device .the phenomenon is experienced routinely , for example , while writing with chalk piece on a black board , playing violin or walking down a staircase with the hand placed on the hand - rail .a large number of studies on stick - slip dynamics have been reported in systems ranging from atomic length scales , for instance , stick - slip observed using atomic force microscope to geological length scales like the stick - slip of tectonic plates causing earthquakes .a few well known laboratory scale systems are - sliding friction and the portevin - le chatelier ( plc ) effect , a kind of plastic instability observed during tensile deformation of dilute alloys , to name only two .most stick - slip processes are characterized by the system spending a large part of the time in the stuck state and a short time in the slip state .this feature is observed both in experiment and in models .see for instance .a counter example where the time spent in the stuck is less than that in the slip state ( observed at high applied strain rates ) is the plc effect .these studies show that while the physical mechanisms that operate in different situations can be quite varied , in general , stick - slip results from a competition among the inherent internal relaxational time scales and the applied time scale . in the case of peeling ,one identifiable internal relaxation time scale is the viscoelastic time scale of the adhesive .other relevant time scales that may be operative need to be included for a proper description of the dynamics .all stick - slip systems are governed by deterministic nonlinear dynamics .models that attempt to explain the dynamical features of stick - slip systems use the macroscopic phenomenological negative force response ( nfr ) feature as an input although the unstable region is not accessible .this is also true for models dealing with the dynamics of the adhesive tape including the present work . 
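the negative force response feature mentioned above, i.e. the two stable branches of the strain energy release rate joined by an unstable region, can be visualised with a purely schematic construction that uses the quoted power-law exponents of roughly 0.3 and 5.5; the prefactors, crossover velocities and the interpolation used for the unstable branch below are arbitrary illustrative choices, not the measured curve nor the peel force function used later in the model.

```python
# schematic two-branch strain-energy-release-rate curve G(v): a low-velocity
# power law (exponent ~0.3, viscous dissipation), a high-velocity power law
# (exponent ~5.5, fracture), and a decreasing unstable branch in between.
# all prefactors and crossover velocities are arbitrary illustrative values.
import numpy as np

def schematic_G(v, A=1.0, B=0.05, v1=1e-2, v2=1.0, n_low=0.3, n_high=5.5):
    """piecewise schematic G(v); the branch for v1 < v < v2 is decreasing."""
    v = np.asarray(v, dtype=float)
    G1, G2 = A * v1 ** n_low, B * v2 ** n_high      # branch end points (G1 > G2)
    out = np.empty_like(v)
    low, high = v <= v1, v >= v2
    mid = ~(low | high)
    out[low] = A * v[low] ** n_low
    out[high] = B * v[high] ** n_high
    # log-linear interpolation stands in for the experimentally inaccessible
    # unstable branch connecting the two stable branches
    t = (np.log(v[mid]) - np.log(v1)) / (np.log(v2) - np.log(v1))
    out[mid] = np.exp(np.log(G1) + t * (np.log(G2) - np.log(G1)))
    return out

v = np.logspace(-4, 1, 200)
G = schematic_G(v)     # plot on log-log axes to see the two power-law branches
```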
in this context, it must be stated that there is no microscopic theory that predicts the negative force - velocity relation in most stick - slip situations except in the case of the plc effect where we have provided a dynamical interpretation of the negative strain rate sensitivity of the flow stress ( see below ) .there are several theoretical attempts to model the stick - slip process observed during peeling of an adhesive tape .maugis and barquin , were the first to write down a model set of equations suitable for the experimental situation and to carry out approximate dynamical analysis .these equations were later modified and a dynamical analysis of these equations was reported .however , the stick - slip oscillations were _ not _ obtained as a natural consequence of the equations of motion .indeed , these equations are singular .subsequently , we devised a special algorithm to solve these differential algebraic equations ( dae ) .this algorithm allows for dynamical jumps across the two stable branches of the peel force function .this was followed by converting the dae into a set of nonlinear ordinary differential equations ( ode ) by including the missing kinetic energy of the stretched tape thereby lifting the singular nature of the dae . apart from supporting dynamical jumps, the ode model exhibits rich dynamical features . however , all these studies discuss only contact point dynamics while the tape has a finite width .the ode model has been extended to include the spatial degrees of freedom that is crucial for describing the dynamics of the peel front as also for understanding the origin of acoustic emission .acoustic emission is commonly observed in an unusually large number of systems such as seismologically relevant fracture studies of rock samples , martensite transformation , micro - fracturing process , volcanic activity , collective dislocation motion etc .the general mechanism attributed to ae is the abrupt release of the stored potential energy although the underlying mechanisms triggering ae are system specific .the nondestructive nature of the ae technique has been useful in tracking the microstructural changes during the course of deformation by monitoring the ae signals .for instance , it is used in fracture studies of rock samples and more recently , a similar approach has been used in understanding collective behavior of dislocations . in both these cases , multiple transducers are used to locate the hypocenters through an inversion process of arrival times . in the latter case , by analysing the dislocations sources generating ae signals , the study establishes the fractal nature of the collective motion of dislocations .( in contrast to these dynamical studies , most studies on ae are limited to compiling the statistics of the ae signals in an effort to find experimental realizations of self - organized criticality . )however , in the case of peeling , using multiple transducers is far from easy and only a single transducer is used leading to scalar time series . in such situations ,dynamical information is traditionally recovered using nonlinear time series analysis . however, a major difficulty arises in the present case due to a high degree of noise present and the associated difficulties involved in curing the noise content . despite large number of experimental investigations and to a lesser extent model studies , several issues related to intermittentpeeling and the associated acoustic emission remain ill understood . 
for instance , there are no models ( even in the general area of stick - slip ) which show that the duration of the stick phase can be equal to or even less than that of the slip phase , a feature which is quite unlike conventional stick - slip dynamics . from a dynamical point of view, this is also suggestive of the existence of at least three time scales .the model represents the acoustic energy in terms of the rayleigh dissipation functional that depends on the local strain rate of the peel front and thus is sensitive to the nature of the peel front dynamics . while preliminary results of the model based on a small domain of parameters were encouraging , no systematic study of the influence of all the relevant time scales on the dynamics of the peel front was carried out .in particular , while the nature of experimental ae signals changes with the traction velocity , the study of the influence of pull speed on internal relaxational mechanisms , the consequent peel front dynamics and its relationship with the acoustic energy was not studied either .the principal objective of the present study is to understand the various contributing mechanisms to the intermittent peel process and its connection to acoustic emission .the objective is accomplished by carrying out a systematic study of the influence of the three internal relaxational time scales namely the two inertial time scales of the tape mass and the roller inertia , and dissipative time scale of the peel front .in particular , we report the influence of the experimentally relevant pull velocity ( covering the entire range ) on the peel front dynamics .these studies show that the model exhibits rich spatiotemporal peel front patterns ( including the stuck - peeled configurations that mimic fibrillar patterns seen in experiments ) arising due to the interplay of the three time scales .consequently , varied patterns of model acoustic signals are seen .another consequence of the inclusion of the three time scales is that it explains the recent observation that the duration of the slip phase can be larger than that of the stick - phase .interestingly , the model studies show that it is possible to establish a correspondence between the various types of model acoustic energy profiles with certain peel front patterns .more importantly , the study shows that even as the acoustic energy dissipated is the spatial average of the local strain rate , it can be noisy suggesting the possible deterministic origin of the experimental acoustic signals . here , we report a detailed analysis of the statistical and dynamical analysis of the experimental ae signals .the study shows that while the intermittent peeling is controlled by the peel force function , acoustic emission is controlled by the dynamics of the peel front patterns that determine the local strain rate .this coupled with a comparative study of a comprehensive nonlinear time series analysis ( tsa ) of the experimental ae signals for a wide range of traction velocities supplemented by a similar study on the model acoustic energy time series provides additional insights into the connection between ae signals and stick - slip dynamics . 
in particular , the model displays the recently observed experimental feature that the duration of the slip phase can be more than that of the stick phase with increase in the pull velocity .finally , the model studies together with the dynamical analysis of the model acoustic signal provide a dynamical explanation for the changes in the nature of the experimental ae signal in terms of the changes in the peel front patterns .a typical experimental set up consists of an adhesive tape mounted on a roller .the tape is pulled at a constant pull velocity using a motor . a schematic representation of the set up is shown in fig . [ tapewidth](a ) .the axis of the roller passes through the point o into the plane of the paper .the drive motor is positioned at o .let the distance between and be denoted by . is the contact point on the peel front .let the peeled length of the tape be denoted by .several geometrical features can be discussed using a projection on to the plane of the paper .let the angle between the tangent to the contact point and be denoted by and the angle by .then , from the geometry of the fig. [ tapewidth](a ) , we get and where is the diameter of the roller tape .let the local velocity of the peel point be denoted by and the displacement ( from a uniform stuck state ) of the peel front by .then , the pull velocity has to satisfy as the peel front has a finite width , we define the corresponding quantities along the peel front coordinate ( i.e. , along the contact line ) by and .then as the entire tape width is pulled a constant velocity , the above constraint generalizes to = 0 , \label{vconstraint}\end{aligned}\ ] ] where is the width of the tape .however , we are interested in the deformation of the peel front of the adhesive , which is a soft visco - elastic material . for the purpose of modeling , while we shall ignore the viscoelastic nature of the adhesive , we recognize its low modulus , i.e. 
, we assume an effective spring constant ( along the contact line ) whose value is much smaller than the spring constant of the tape material .this also implies that the force along equilibrates fast and therefore the integrand in eq .( [ vconstraint ] ) can be assumed to vanish for all .thus , the above equation reduces to eq .( [ localconstr ] ) .the present model is an extension of the ode model for the contact point dynamics .the ode model already contains information on the inertial time scale of the tape mass that allows for dynamical jumps across the two branches of the peel force function .the extension involves introducing the rayleigh dissipation functional to deal with acoustic emission apart from introducing the spatial degrees of freedom .the equations of motion for the contact line dynamics are derived by writing down the relevant energy terms consisting of the kinetic energy , potential energy and the energy dissipated during the peel process .the total kinetic energy is the sum of the rotational kinetic energy of the roller tape and the kinetic energy of the stretched part of the tape .this is given by ^ 2 dy + { 1\over2}\int^b_0 \rho \big[\dot u(y ) \big]^2 dy .\label{ke}\ ] ] here , is the moment of inertia per unit width of the roller tape and the mass per unit width of the tape .the total potential energy consists of the contribution from the displacement of the peel front due to stretching of the peeled tape and possible inhomogeneous nature of the peel front .this is given by ^ 2 dy + { 1\over2}\int^b_0 { k_g b } \big[{\partial u(y ) \over \partial y } \big]^2 dy .\label{pe}\ ] ] the peel process always involves dissipation .indeed , the peel force function with the two stable branches , one corresponding to low velocities and another at high velocity arises from two different dissipative mechanisms .apart from this , there is an additional dissipation that arises from the rapid rupture of the peel front which in turn results in the accelerated motion of local regions of the peel front .we consider this accelerated motion of the local slip as the source responsible for the generation of acoustic signals .any rapid movement also prevents the system from attaining a quasistatic equilibrium which in turn generates dissipative forces that resist the motion of the slip .such dissipative forces are modeled by the rayleigh dissipation functional that depends on the gradient of the local displacement rate .indeed , such a dissipative term has proved useful in explaining the power law statistics of the ae signals during martensitic transformation as also in explaining certain ae features in fracture studies of rock sample .then , the total dissipation can be written as the sum of these two contributions ^ 2 dy , \label{diss}\ ] ] where physically represents the peel force function assumed to be derivable from a potential function ( see ref .we denote the second term in eq .( [ diss ] ) by which is identified with the energy dissipated in the form of ae . 
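a minimal sketch of how the second term of eq. ([diss]) can be evaluated on a discretized peel front is given below: the model acoustic energy is taken proportional to the spatial average of the squared gradient of the local displacement rate along the contact line. the grid spacing, the dissipation coefficient and the synthetic velocity profiles are placeholders rather than the values used in the simulations.

```python
# illustrative discretization of the acoustic-energy term: R_ae is taken
# proportional to the spatial average of the squared gradient of the local
# peel velocity along the contact line. gamma_u and dy are placeholder values.
import numpy as np

def model_ae_energy(v_front: np.ndarray, gamma_u: float = 0.01, dy: float = 1.0) -> float:
    """v_front: local peel velocities v(y_i) on the discretized peel front."""
    dv_dy = np.gradient(v_front, dy)        # local strain rate of the peel front
    return 0.5 * gamma_u * np.mean(dv_dy ** 2)

# a stuck-peeled configuration (alternating slow and fast segments) dissipates
# far more acoustic energy than a smooth front moving at the same mean speed
y = np.arange(100)
smooth = np.full(100, 1.0)
stuck_peeled = np.where((y // 10) % 2 == 0, 0.05, 1.95)
print(model_ae_energy(smooth), model_ae_energy(stuck_peeled))
```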
in the context to plastic deformation , the acoustic energy arising from the abrupt motion of dislocations is given by , where is the local plastic strain rate .following this , we interpret as the energy dissipated in the form of ae signals .note that is the local strain rate of the peel front .as for the first term in eq .( [ diss ] ) , the form of the peel force function we use is given by we stress here that as we are interested in the generic properties of the peeling process , the exact form of the peel force function used here is not important as long as major experimental features like the magnitude of the jump in the velocity across the two branches , the range of values of the measured peel force function , in particular the values at the maximum and minimum , are captured .as can be seen from eq .( [ ke ] ) , there are two time scales ; one corresponding to the inertia of the tape mass and the other due to the roller inertia .in addition , there is a third time scale , namely the dissipative time scale in eq .( [ diss ] ) ( second term ) .thus , there are three internal relaxational time scales in the model .apart from this , there is also a time scale due to the pull speed .then the nature of the dynamics is determined by an interplay among all these time scales .it is more convenient to deal with scaled quantities .consider introducing basic length and time scales which will be used to rewrite all the energy terms in scaled form .a natural choice for a time like variable is with . in a similar way, we introduce a basic length scale defined by , where is the value of at on the left stable branch .we define scaled variables by , , and .the peel force function can be written as . here and are the dimensionless peel and pull velocities respectively with representing the dimensionless critical velocity at which the unstable branch starts . using thiswe can define a few relevant scaled parameters , , , and , where is a unit length variable along the peel front .the parameter is a measure of the relative strengths of the inertial time scale of the stretched tape to that of the roller , the relative strengths of the effective elastic constant of the adhesive to that of the tape material and the strength of the dissipation coefficient .then , the scaled local form of eq .( [ localconstr ] ) takes the form in terms of the scaled variables , the scaled kinetic energy and scaled potential energy can be respectively written as dr , \\\label{scke } u^s_p & = & { 1\over2}\int^{b / a}_0 \big [ x^2(r ) + k_0 \big({\partial x(r ) \over \partial r } \big)^2 \big]dr .\label{scpe}\end{aligned}\ ] ] the total dissipation in the scaled form is .\label{scdiss}\ ] ] the first term on the right hand side is the frictional dissipation arising from the peel force function .the scaled peel force function , , can be obtained by using the scaled velocities in eq .( [ f ] ) .the nature of is shown in fig .[ tapewidth](b ) .note that the maximum occurs at .we shall refer to the left branch ab as the ` stuck state ' and the high velocity branch cd as the peeled state. 
the second term on the right hand side denotes the scaled form of the acoustic energy dissipated .the lagrange equations of motion in terms of the generalized coordinates and are using this , we get the equations of motion as however , eqs .( [ seqalpha ] , [ sequ ] ) should satisfy the constraint eq .( [ localconstraint ] ) .this consistency can be imposed by using the theory of mechanical systems with constraints .this leads to an equation for the acceleration variable obtained by differentiating eq .( [ localconstraint ] ) and using eqs .( [ sequ ] ) , /v_c . \label{sdotv}\end{aligned}\ ] ] these eqs .( [ localconstraint ] , [ seqalpha ] ) and ( [ sdotv ] ) constitute a set of nonlinear partial differential equations that determine the dynamics of the peel front .they have been solved by discretizing the peel front on a grid of n points and using an adaptive step size stiff differential equations solver ( matlab package ) .we have used open boundary conditions appropriate for the problem .the initial conditions were drawn from the stuck configuration ( i.e. , the values are from the left branch of ) with a small spatial inhomogeneity in such that they satisfy eq .( [ localconstraint ] ) approximately .the system is evolved till a steady state is reached before the data is accumulated .the nature of the dynamics depends on the pull velocity , the dissipation coefficient and .we have carried out detailed studies of the dynamics of the model over a wide range of values of these parameters keeping other parameters fixed at , , ( n / m ) and ( in units of the grid size ) .larger system size is used whenever necessary .one of the objectives is to carry out statistical and nonlinear time series analysis of experimental ae signals associated with the jerky peel process with a view to understand the results on the basis of model studies .acoustic emission data files were obtained from peel experiments under constant traction velocity conditions that cover a wide range of values from to cm / s .signals were recorded at the standard audio sampling frequency of khz ( having khz band width ) using a high quality microphone .they were digitized and stored as bit signals in raw binary files .there are 38 data files each containing approximately points .the ae signals are noisy as in most experiments on ae .two characteristic features of low dimensional chaos are the existence of a strange attractor with self similar properties quantified by a fractal dimension ( or equivalently the correlation dimension ) and sensitivity to initial conditions quantified by the existence of a positive lyapunov exponent . given the equations of motion , these quantities can be directly calculated . however ,when a scalar time series is suspected to be a projection from a higher dimensional dynamics , they are traditionally analyzed by using embedding methods that attempt to recover the underlying dynamics .the basic idea is to unfold the dynamics through a phase space reconstruction of the attractor by embedding the time series in a higher dimensional space using a suitable time delay .consider a scalar time series measured in units of sampling time defined by ] .the delay time suitable for the purpose is either obtained from the autocorrelation function or from mutual information .once the reconstructed attractor is obtained , the existence of converged values of correlation dimension and a positive exponent is taken to be a signature of the underlying chaotic dynamics . 
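the reconstruction step described above can be sketched as follows: the delay is chosen here from the first zero crossing of the autocorrelation function (mutual information is the standard alternative), and the scalar series is unfolded into d-dimensional delay vectors. this is generic embedding code under those assumptions, not the specific preprocessing applied to the 38 experimental files.

```python
# generic time-delay embedding of a scalar series x(k): choose the delay tau
# from the autocorrelation function and build d-dimensional delay vectors.
import numpy as np

def delay_from_autocorrelation(x: np.ndarray) -> int:
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    below = np.where(acf <= 0.0)[0]          # first zero crossing of the acf
    return int(below[0]) if below.size else 1

def embed(x: np.ndarray, dim: int, tau: int) -> np.ndarray:
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# usage on a synthetic noisy oscillation standing in for an ae record
t = np.arange(6000)
x = np.sin(0.03 * t) + 0.2 * np.random.default_rng(0).standard_normal(t.size)
tau = delay_from_autocorrelation(x)
Y = embed(x, dim=5, tau=tau)                 # reconstructed attractor, shape (n, 5)
```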
in real systems ,most experimental signals contain noise which in this case is high .there are several methods designed to cure the noise component .usually , the cured data sets are then subjected to further analysis .the correlation integral defined as the fraction of pairs of points and whose distance is less than , is given by where is the step function and the number of vector pairs summed .a window is imposed to exclude temporally correlated points .the method provides equivalence between the reconstructed attractor and the original attractor .it has been shown that a proper equivalence is possible if the time series is noise free and long .for a self similar attractor , where is the correlation dimension .then , as is increased , one expects to find a convergence of the slope to a finite value in the limit of small .however , in practice , the scaling regime is found at intermediate length scales due to the presence of noise .the existence of a positive lyapunov exponent is considered as an unambiguous quantifier of chaotic dynamics .however , the presence of superposed noise component , which in the present case is high , poses problems . in principal the noise component can be cured and then the lyapunov exponent calculated . here, we use an algorithm that does not require preprocessing of the data ; it is designed to average out the influence of superposed noise .the algorithm , which is an extension of eckmann s algorithm , has been shown to work well for reasonably high levels of noise in model systems as well as for short time series .the method has been used to analyze experimental time series as well ( for details , see ref . ) . in the conventional eckmann s algorithm , a sequence of tangent matricesare constructed that connect the initial small difference vector to evolved difference vectors , where is the propagation time . in the algorithm ,the number of neighbors used is small typically min $ ] contained in a spherical shell of size .a simple modification of this is to use those neighbors falling between an inner and outer radii and respectively .then , the inner shell is expected to act as a noise filter . however , so few neighbors will not be adequate to average out the noise component superposed on the signal .thus , the modification we effect is to allow more number of neighbors so that the noise statistics is sampled properly .( see for details . ) as the sum of the exponents should be negative for a dissipative system , we impose this as a constraint .in addition , we also demand the existence of stable positive and zero exponents ( a necessary requirement for continuous time systems like ae ) over a finite range of shell sizes . as a cross check, we have also calculated the correlation integral and lyapunov spectrum using the tisean package as well .a systematic study of the dynamics of the model is essential to understand the influence of the various parameters on the spatiotemporal dynamics of the peel front , its connection to intermittent peeling and to the accompanying acoustic emission . 
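the correlation sum defined earlier in this section can be estimated directly from the delay vectors; the sketch below uses a brute-force pairwise count with a theiler window to exclude temporally correlated pairs, and the slope of log c(r) versus log r in the intermediate scaling region gives a crude estimate of the correlation dimension. the embedding parameters, window and radii are illustrative choices, not those used for the experimental signals.

```python
# brute-force grassberger-procaccia correlation sum C(r) with a theiler window
# excluding temporally correlated pairs; the correlation dimension is the slope
# of log C(r) vs log r in the intermediate scaling region.
import numpy as np

def correlation_sum(Y: np.ndarray, radii: np.ndarray, theiler: int = 100) -> np.ndarray:
    n = len(Y)
    counts = np.zeros(len(radii))
    pairs = 0
    for i in range(n):
        j0 = i + theiler + 1                 # skip pairs closer than the window
        if j0 >= n:
            break
        d = np.linalg.norm(Y[j0:] - Y[i], axis=1)
        pairs += d.size
        counts += (d[None, :] < radii[:, None]).sum(axis=1)
    return counts / pairs

# stand-in delay vectors (a noisy sine); in practice Y would come from the
# delay-embedding step sketched above
rng = np.random.default_rng(1)
x = np.sin(0.03 * np.arange(6000)) + 0.05 * rng.standard_normal(6000)
tau, dim = 30, 5
n = len(x) - (dim - 1) * tau
Y = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

radii = np.logspace(-2, 0.5, 25)
C = correlation_sum(Y, radii, theiler=100)
valid = C > 0
D2 = np.polyfit(np.log(radii[valid]), np.log(C[valid]), 1)[0]   # crude slope estimate
```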
from eq .( [ scdiss ] ) , it is clear that the acoustic energy is the spatial average of the local strain rate .as the peel front patterns determine the nature of acoustic energy , a detailed study of the dependence of the patterns on the relevant parameters and on the pull velocity should help us to get insight into ae generation process during peeling .we begin by making some general observation about the various parameters and their influences .the dynamics of the model is sensitive to the three time scales ( reduced from four due to scaling ) determined by the parameters , and . is related to the ratio of inertial time of the tape mass to that of roller inertia ( see below ) .the dissipation parameter reflects the rate at which the local strain rate relaxes .the pull velocity determines the duration over which all the internal relaxations are allowed to occur .the range of is determined by the allowed values of the tape mass and the roller inertia .following our earlier studies , we vary from to and from 0.001 to 0.1 .thus , can be varied over a few orders of magnitude keeping one of them fixed . for model calculations ,the dissipation parameter is varied from 0.001 to 1 .( however , an order of magnitude estimate shows that , see below . ) the range of of interest is determined by the instability domain which is from 1 to as shown in fig .[ tapewidth](b ) . to appreciate the influence of inertial time scale of tape mass parameterized by ,consider the low mass limit of the ode model which has been shown to lead to the dae model equations . in this limit ,the velocity jumps across the two branches of the peel force function are abrupt with infinite acceleration .however , finite tape mass introduces an additional time scale that leads to jumps in to occur over a finite time scale which in turn the magnitude of the velocity jumps .indeed , the phase space trajectory need not jump to the high velocity branch of , as we shall see .this can be better appreciated by considering the ode model ( that ignores the spatial degrees of freedom ) .consider the relevant ode model equations ( in unscaled form ) . , \label{meqn}\end{aligned}\ ] ] where is shown in fig .[ tapewidth](a ) and the displacement of the contact point . is mass of the tape and the spring constant of the tape . from , eqs .( [ ieqn ] , [ meqn ] ) , two inertial time scales can be identified , one corresponding to the roller inertia and another to that of the tape mass .( note that in eq .( [ meqn ] ) of the ode model corresponds to in the present model . )thus , in present model is directly related to the ratio of these two inertial time scales .differentiating eq .( [ localconstr ] ) , we get eq . ( [ meqn ] ) is the force balance equation . in the limit , we have the algebraic constraint .differentiating this equation shows that diverges at points of maximum and minimum of the peel force function .this demonstrates that in the low mass limit , the orbits jump to the high velocity branch abruptly .now consider eq .( [ veqn ] ) that relates the acceleration of the peel point ( ) , acceleration of the displacement i.e. 
, and .this again is basically a force balance equation as can be seen by multiplying the equation by the tape mass .as the right hand side is small , any increase in one of these acceleration variables implies a decrease in the other variables .as low mass limit implies infinite acceleration of the peel front ( ) across the peel force function , finite mass implies the velocity jumps across the peel force function is reduced .it is worthwhile to note that the effect of inertial time scale causing jumps across the unstable branch to occur at a finite time scale is a general feature .this has been recognized and demonstrated experimentally in the context of the plc effect .now consider estimating the order of magnitude of the dissipative time scale .the unscaled dissipation parameter is related to the fluid shear viscosity and thus an order of magnitude estimate can be obtained .typical values of for adhesives at low shear rates is pa.s . as stress is directly related to shear viscosity , can be estimated using typical dimensions of the peel front .it has been shown that deformed peel front dimension is about 100 , the thickness of the adhesive is 50 and the width of the peel front mm ( width of the tape ) .it is easy to show that j.s .thus , the range of is taking pa.s . as some of the numbers used are materialdependent , this is just an order of magnitude estimate . for model studies ,the range of is taken to be from 1 to 0.001 .however , we will not discuss the results for as these are similar to 0.01 .within the scope of the model , the model acoustic energy given by ( in the discretized form ) depends on the nature of the local displacement rate .based on this relation , some general observations can be made on the nature of and its dependence on the peel front dynamics . from eq .( [ sequ ] ) , high implies that the coupling between neighboring sites is strong and hence the local dynamics at one spatial location has no freedom to deviate from that of its neighbor .thus , the displacement rate at a point on the peel front can not differ from that of its neighbor .for the same reason , low implies weak coupling between displacement rates on neighboring points on the peel front which therefore can differ substantially .this clearly should lead to significantly more inhomogeneous peel velocity profile .based on the above arguments , high should lead to smooth peel front and consequently sharp bursts in the model acoustic energy that occurs during jumps between the two branches of .in contrast , when is small , should be high as also spread out in time .however , as the exact nature of the peel front pattern is sensitive to the values of , pull velocity and , the nature of depends on all the three time scales .indeed , one should expect that the more rapidly the peel front patterns change with time , the noisier the model acoustic energy should be .this is one feature that we hope to compare with experimental acoustic signals .we have carried out extensive studies on the nature of the dynamics for a wide range of values of the parameters stated above .the peel front dynamics is analyzed by recording the velocity - space - time patterns of the peel front , the phase plots in the plane for an arbitrary spatial point on the peel front and the model acoustic energy dissipated .( unless otherwise stated , these plots refer to steady state dynamics after all the transients have died out . 
) here , we present a few representative solutions for different sets of parameters within the range of interesting dynamics .our analysis shows that while the nature of the dynamics results from competing influences of the three time scales , the dissipation parameter appears to have a significant influence on the spatiotemporal dynamics of the peel front .given a value of there is a range of values of . in this case , the set of values are : and . the dissipation coefficient is varied from to 0.01 . for high ,only smooth peeling is seen independent of the magnitude of the pull velocity .the peel front switches between the low and high velocity branches of the peel force function .plots of the smooth nature of the entire peel front are shown in figs .[ i5m3v1gu1](a , b ) for .figure [ i5m3v1gu1](a ) shows the nature of the peel front when the system is on the low velocity branch of , i.e. , the local velocities of all spatial elements follow the ab branch of . the small amplitude synchronous high frequency oscillation of the entire peel front results from the roller inertia .( compare the values of in the two figures . ) a phase plot in the plane for an arbitrary point on the peel front is shown in fig .[ i5m3v1gu1](c ) .the small amplitude oscillation of the peel front shown in fig .[ i5m3v1gu1](a ) corresponds to the velocity oscillations in the phase plot ( fig .[ i5m3v1gu1](c ) ) .as high implies relatively low values of , it can be shown ( on lines similar to ref . ) that the orbit sticks to the stationary branches ( slow manifold ) of the peel force function jumping between the branches only at the limit of stability typical of relaxation oscillations .the corresponding model acoustic energy shows a sequence of small amplitude spikes corresponding to the small amplitude oscillations arising from the roller inertia [ fig .[ i5m3v1gu1](d ) ] followed by large bursts that occur at regular intervals .the bursts result from the peel front jumping from the stuck to the peeled state and back .note that the duration of the bursts are short compared to duration between them .however , as we decrease to keeping , we observe rugged and stuck - peeled configurations .the rugged pattern is seen when the system is on the ab branch of .even so , on reaching the limit of stability , the entire contact line peels nearly at the same time as shown in fig .[ i5m3v1gu1](e ) .but once it jumps to the high velocity branch cd of , the peel front that has nearly uniform peel velocity commensurate with that of the right branch of becomes unstable and breaks up into stuck and peeled segments as shown in fig .[ i5m3v1gu1](f ) .the width of these segments increases in time with a concomitant decrease in the magnitude of the velocity jumps of peeled segments , eventually the entire peel front goes into a stuck state .then , the cycle restarts with the peel front switching between the rugged and stuck - peeled ( sp ) states .the phase plot is similar to that for again sticking to the slow manifold .the model acoustic energy dissipated is also similar to that for except that the large bursts are comparatively broader as should be expected due to presence of stuck - peeled configurations that contribute to large changes in the local velocity .as we decrease to 0.01 , the observed patterns are similar to those for but the sequence of the peel front patterns is different . 
starting with a low velocity configuration that is even more rugged compared to that for as shown in fig .[ i5m3v1gu0_1](a ) , the peel process starts with a small stuck segment getting peeled [ fig .[ i5m3v1gu0_1](b ) ] . there after , several stuck segments peel out leading to a stuck - peeled pattern as shown in fig .[ i5m3v1gu0_1](c ) , eventually , the entire peel front peels - out leaving a nearly uniform peeled state as shown in fig . [ i5m3v1gu0_1](d ) ( with a velocity commensurate with the high velocity branch of ) .this is again destabilized with some segments of the peel front getting stuck as in the case of ( similar to fig .[ i5m3v1gu1](f ) ) .the number of such stuck segments increases with time , eventually the whole peel front goes into a stuck state .the cycle restarts .the phase plot is similar to and 1 .indeed , for a given , independent of the phase plot changes only when is increased .but shows broader bursts compared to as the corresponding stuck - peeled configurations last longer .even so , the duration of the sp configurations in a cycle is short , i.e. , the duration of the bursts is short compared to the duration between them .now we consider the influence of increasing the pull velocity ( keeping fixed at 7.88 ) which in turn should leave less time for internal relaxational mechanisms to operate .intuitively one should expect that some patterns observed for low may not be seen for higher values of . case is uninteresting for the reasons stated above .but , reducing to 0.1 does provide some degree of freedom for the local dynamics to operate at each point .even so , for , the peel front switches between a sp configuration with most segments momentarily in the stuck state ( similar to fig .[ i5m3v1gu0_1](b ) ) and a configuration that has several stuck - peeled segments ( similar to fig .[ i5m3v1gu0_1](c ) ) .the corresponding phase plot shows that the orbit jumps moves slightly beyond the upper value of and jumps back from the right branch even before reaching the minimum of ( not shown ) . as we decrease to 0.01 ,the rugged configuration seen for is no longer seen and only sp configurations are observed as shown in fig .[ i5m3v2gu0_1_01](a ) .the sp configurations are dynamic in the sense , segments that are in stuck state at one time become unstuck at a later time and vice versa . for this case ( ) , these rapid changes occur over a short time scale .consequently , the model acoustic energy is quite noisy as shown in fig .[ i5m3v2gu0_1_01](b ) but has a noticeable periodic component .the points of minima correspond to configurations that have fewer peeled segments compared to those near the peak of .the phase plot in the plane is limited to the upper part of .even as the phase plots for any two spatial points look similar , there is a phase difference . for instance , at any given time , the phase point of stuck segment will be on the left branch while that for peeled point will be on the right branch .as we increase to ( keeping at ) , there is even lesser time for peel front inhomogeneities to relax and thus , we observe a smooth peeling for and as also for .as we decrease to , we see only sp patterns ( not shown but similar to fig .[ i5m3v2gu0_1_01](a ) ) . the corresponding phase plot for an arbitrary point on the peel front shown in fig .[ i5m3v2gu0_1_01](c ) is confined to the top of .the corresponding is noisy and irregular as shown in fig .[ i5m3v2gu0_1_01](d ) . 
however , when we increase to , initially , one does observe the patterns switching between rugged and sp configurations .if we wait long enough , we observe only sp configurations that are different from those for lower . in this case , the stuck and peeled segments are long lived .a top view of the sp pattern is shown in fig . [ i5m3v2gu0_1_01](e ) .the phase space orbit in the plot is pushed beyond the upper limit of .the energy dissipated is quite regular ( but aperiodic ) unlike that for lower pull velocities as shown in fig .[ i5m3v2gu0_1_01](f ) .this regularity is clearly due to the long lived nature of these sp configurations .the long lived nature of the sp configurations for high pull velocity is a general feature , i.e. , the duration over which the stuck segments remain stuck ( peeled segments remain peeled ) increases as we increase the pull velocity .the dynamics is no longer interesting beyond as only smooth peeling is seen .for this value of , the allowed set of values of are and .the dynamics is more interesting for this case as there is a scope for competition among the three time scales .we first study the dynamics keeping and varying the dissipation parameter . for 1.0 ,the uniform nature of the peel front seen for disappears and even for short times , stuck - peeled configurations are seen .the peel front patterns stabilize to stuck - peeled configurations as shown in figs .[ i2m1v1gup1](a , b ) . as can be seen these sp patterns have only a few stuck or peeled segments with moderate velocity jumps and smooth variation along the peel front unlike the sp configurations discussed earlier .( note that the sp configuration in fig .[ i2m1v1gup1](b ) has more stuck segments compared to fig .[ i2m1v1gup1](a ) . )the moderate velocity jumps can be understood by noting that the phase space orbit never visits the high velocity branch of as can be seen from fig .[ i2m1v1gup1](c ) .it is interesting to note that the trajectory stays close to the unstable branch of even after attempting to jump from the low velocity branch .such orbits are reminiscent of canard type solutions .the trajectory is irregular and is suggestive of spatiotemporal chaotic nature of the peel front .the energy dissipated shown in fig .[ i2m1v1gup1](d ) is continuous and irregular due to the dynamic sp pattern as should be expected , but there is a noticeable periodic component .the rough periodicity of can be traced to fact that the peel front configurations switch between patterns with more stuck segments and less stuck segments .( from the number shown on -axis , fig .[ i2m1v1gup1](a ) can be identified with minimum and fig .[ i2m1v1gup1](b ) with the peak of in fig .[ i2m1v1gup1](d ) .see the marked arrows as well . ) as we decrease to 0.1 , the sp configurations observed have more stuck and peeled segments compared to ( compare fig .[ i2m1v1gup0_1](a ) with fig .[ i2m1v1gup1 ] ( a ) ) .however , the magnitude of the velocity jumps remains moderate as in the previous case .this is again due to the fact that the orbit never visits the high velocity branch of .( recall that given a value of , the phase plot remains the same for different values as long as is fixed ) .indeed , for this value of , the orbit never jumping to the high velocity branch is a consequence of finite inertia of the tape mass compared to that of the roller inertia as discussed earlier . 
for this case ,the model acoustic energy is also irregular and continuous as shown in fig .[ i2m1v1gup0_1](b ) with a noticeable periodic component .now , if we decrease further to 0.01 , the peel front pattern displays increased number of stuck and peeled segments with each stuck segment having only a few contiguous stuck points as can be seen from fig .[ i2m1v1gup0_1](c ) .note also that there is a large dispersion in the magnitudes of the velocity jumps of the peeled segments even as the largest one is significantly smaller than the value of cd branch of . as can be seen from the fig .[ i2m1v1gup0_1](c ) , even though the pattern is dynamic , the segments that are stuck are barely so . thus , the configuration shown in fig .[ i2m1v1gup0_1](c ) gives the feeling of a critically poised state .the corresponding phase plot ( similar to that shown in fig .[ i2m1v1gup1](c ) ) is irregular and possibly suggestive of spatiotemporal chaotic nature of the peel front .the acoustic energy is very irregular without any trace of periodicity as shown in fig .[ i2m1v1gup0_1](d ) .we now consider the influence of increasing the pull velocity . as we increase to 2.48 , the spatiotemporal patterns seen for , 0.1 and 0.01 are slightly different from those for . for ,the peel process goes through a cycle of configurations shown in figs .[ i2m1v2gup1](a , b ) .it is clear that fig .[ i2m1v2gup1](a ) has more segments in the stuck state while fig .[ i2m1v2gup1](b ) is the usual kind of sp configuration except that the stuck and peel segments are fewer . for this case , the stuck and peeled segments last longer than those for .the corresponding for each exhibits noisy bursts overriding a periodic component . a typical plot for is shown in fig .[ i2m1v2gup1](c ) . from the time labels as also the arrows shown , the minima and maxima in can be identified with figs .[ i2m1v2gup1](a , b ) respectively .the orbit in the plane moves into regions much beyond the values allowed by as is clear from fig .[ i2m1v2gup1](d ) .the phase plots for and 0.01 are similar to this case . for , the peel front pattern goes through a cycle of stuck - peeled configurations ( with more stuck and peeled segments than for ) and stuck segments ( similar to fig .[ i2m1v2gup1](a ) ) . yet , the energy dissipated is similar to fig .[ i2m1v2gup1](c ) for which is surprising considering that there are more stuck and peeled segments compared to case .this can be traced long lived of the stuck or peeled configurations that hardly change over a cycle ( as in the case of , see fig .[ i5m3v2gu0_1_01](e ) ) .the peel process is similar even for .as we increase the peel velocity to 4.48 , the influence of this time scale on the peel front pattern is discernable even for .the spatiotemporal patterns of the peel front switches sequentially from nowhere stuck configuration shown in fig .[ i2m1v4gup1](a ) to stuck - peeled configuration with few stuck and peeled segments shown in fig .[ i2m1v4gup1 ] ( b ) .note that there are very few stuck and peeled segments .the corresponding exhibits noisy periodic pattern similar to fig .[ i2m1v2gup1](c ) for . the phase plot in fig . [ i2m1v4gup1](c ) shows that the orbit can move much beyond the values allowed by .as we decrease to 0.1 , the nowhere stuck configuration [ fig .[ i2m1v4gup1](a ) ] is replaced by a partly stuck , partly peeled configuration and a sp configuration . 
for the case ( fig . [ i2m1v4gup1](c ) ) , the phase plot is slightly different as the orbit makes several loops before it jumps to the low velocity branch without visiting the high velocity branch of . the nature of is still noisy and periodic , similar to fig . [ i2m1v2gup1](c ) . as we decrease to 0.01 , the peel process goes through the sp configurations shown in figs . [ i2m1v4gup1](d , e ) . note that fig . [ i2m1v4gup1](e ) has a large dispersion in the magnitude of the velocity jumps of the peeled segments compared to that in fig . [ i2m1v4gup1](d ) . it is worth emphasizing that the increase in the number of stuck and peeled segments with decrease in is a general feature . despite the higher number of stuck and peeled segments , for is similar to that for , as these peel front configurations are long lived , which again is a general feature observed at high pull velocities . finally , it should be stated that for , in general the velocity variation along the peel front is much smoother compared to other values of . the dynamics is uninteresting beyond as only smooth peeling is seen . for this value of , there is just one set of values of tape mass and roller inertia , namely , and . as the tape mass is low , this also corresponds to the dae type of solutions for each spatial point . thus , the velocity jumps between the two branches of the peel force function will always be abrupt , with the roller inertia playing a major role in allowing the orbits to jump between the branches of the peel force function as demonstrated earlier . consider the influence of the dissipation parameter keeping . for , peeling is uniform and thus the whole peel front switches between the two branches of the peel force function . the acoustic energy shows a bunch of seven double spikes that appear at regular intervals as shown in fig . [ i2m3v1gup01](a ) . ( the number of spikes is correlated with the number of cascading loops seen in the phase plot , see below . ) as we decrease to 0.01 , the peel front goes through a cycle of patterns with only a few peeled segments and those with a large number of stuck - peeled segments as shown in figs . [ i2m3v1gup01](b ) and ( c ) respectively . the phase plot in the plane of an arbitrary point on the peel front jumps between the two branches of . as shown in fig . [ i2m3v1gup01](d ) , in a cycle , the trajectory starting at the highest value of stays on for a significantly shorter time compared to that on the left branch . the orbit then cascades down through a series of back and forth jumps between the two branches of . ( for , independent of the value , the nature of the phase plot is the same with seven loops . ) the corresponding model acoustic energy consists of a rapidly fluctuating time series with an overall convex envelope of bursts separated by a quiescent state as shown in fig . [ i2m3v1gup01](e ) . ( contrast this with fig . [ i2m3v1gup01](a ) for . ) from the time labels in figs . [ i2m3v1gup01](b , c ) , both configurations belong to the region within the bursts [ fig . [ i2m3v1gup01](e ) ] .
to understand this complex pattern of bursts in , we have looked at the fine structure of each of these bursts along with the evolution of the associated configurations . one such plot is shown in fig . [ i2m3v1gup01](f ) , which shows that the fine structure consists of seven bursts within each convex envelope . these seven bursts can be correlated with the seven loops in the phase plot shown in fig . [ i2m3v1gup01](d ) . the time interval marked lm in the phase plot corresponds largely to a stuck configuration ( not shown ) and hence can be easily identified with the quiescent region in . following the peel front patterns continuously , it is possible to identify the sequence of configurations that leads to the substructure shown in [ fig . [ i2m3v1gup01](f ) ] . for instance , the loop marked pqrst in the plot corresponds to the burst between p and t in fig . [ i2m3v1gup01](f ) . during this period , the configuration at p is largely in the stuck state ( as in fig . [ i2m3v1gup01](b ) ) , which gradually evolves with more and more segments peeling out [ fig . [ i2m3v1gup01](c ) ] as the trajectory moves from . as the number of stuck and peeled segments reaches a maximum , reaches the peak region . then , during the interval corresponding to s to t , the number of peeled segments decreases abruptly . thereafter , the next cycle of configurations ( corresponding to the next loop in the phase plot ) ensues . as we increase the pull velocity , the peel front is smooth for as well as for 0.1 for the entire range of pull speeds . however , for , as we increase to 2.48 , the peel process goes through a cycle of sp configurations shown in figs . [ i2m3v2gup0_01](a , b ) . note that there is a large dispersion in the jump velocities as is clear from fig . [ i2m3v2gup0_01](a ) . the corresponding shows a rapidly fluctuating triangular envelope of bursts with no quiescent region of the kind seen for the case . this is shown in fig . [ i2m3v2gup0_01](c ) . the corresponding phase plot is similar to fig . [ i2m3v1gup01](d ) but has twelve loops . in addition , the value of the upper loop extends far beyond that allowed by . we also see a fine structure similar to that in fig . [ i2m3v1gup01](e ) . as in the previous case , it is possible to identify configurations that correspond to the minima and to regions near the maxima of . as we increase further to 4.48 , only sp configurations are seen . the energy dissipated shows continuous bursts overriding a sawtooth form as shown in fig . [ i2m3v4gup0_01](b ) . the phase plot shows large excursions way beyond the peel force function values as shown in fig . [ i2m3v4gup0_01](c ) . a general comment may be relevant regarding the large excursions of the trajectory in the phase plot as we increase the pull velocity . this is easily explained for the low case ( low tape mass , high roller inertia ) . it is clear from eq . ( [ meqn ] ) that we have . as can take on positive and negative values , one can see that and are determined by the minimum ( negative ) and maximum values of as argued in . it is possible to extend this argument to the finite tape mass case . finally , it must be stated that the dynamics is no longer interesting beyond .
( table caption : largest lyapunov exponent for the model for various parameter values . for all , the lle reaches a value near zero for . )
in section iv , we showed that the peel front patterns for several sets of parameters are spatiotemporally chaotic . more importantly , the model acoustic energy is quite irregular even as it is of dynamical origin . this suggests the possibility that the experimental ae signals could be chaotic . however , time series often have an undesirable systematic component , which needs to be removed from the original data . for instance , in the plc effect , the stress - strain time series has an overall increasing stress arising from the work hardening component of the stress , which needs to be subtracted . in the present case , the experimental data for high pull velocities do show a background variation . a simple way of eliminating this background component is to use window averaging and subtract this component from the raw data . moreover , as stated in the introduction , the experimental ae data are quite noisy and therefore it is necessary to cure the data ( using standard noise reduction techniques ) before subjecting them to further analysis . simple visual checks for the existence of chaos such as phase plots , power spectra etc . have been carried out . we have also used singular value decomposition , false nearest neighbor search etc . figures [ dyinv](a , b ) show the raw and cured data respectively for cm / s . clearly , the dominant features of the time series are retained except that small amplitude fluctuations are reduced or washed out . statistical features like the distribution function for the amplitude of the ae signals , power spectrum etc . are not altered . for instance , the two stage power law distribution for the amplitude of ae signals for the raw data ( shown in fig . [ expt_dist](b ) ) is retained except that the exponent value for the small amplitude regime is reduced from to , without altering the exponent corresponding to large amplitudes . this reduction is understandable as small amplitude fluctuations are affected during curing . the cured data are used to calculate the correlation dimension for all the data files . however , for calculating the lyapunov spectrum using our algorithm , the raw data are adequate as our algorithm is designed to process noisy data . ( in contrast , calculating the lyapunov spectrum using the tisean package requires the cured data . ) to optimize the computational time , all our calculations are carried out using one fifth of each data set , as each file contains a large number of points and there are 38 data sets . the typical autocorrelation time is about four units of sampling time . however , using a smaller value of , we have calculated the correlation integral for all the data files . converged values of the correlation dimension are seen only in the region of pull velocities in the subinterval 3.8 to 6.2 cm / s . a log - log plot of for the pull velocity cm / s is shown in fig . [ dyinv](c ) for to . a scaling regime of more than three orders of magnitude is seen with . this is at the beginning of the chaotic window . we have calculated the lyapunov spectrum using our algorithm . the lyapunov spectrum for / s is shown in fig . [ dyinv](d ) . ( the outer shell radius is kept at . ) note that the second exponent is close to zero , as should be expected of continuous flow systems .
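purely as an illustration of the kind of analysis described above ( window - average detrending , delay embedding with a grassberger - procaccia correlation integral , and the kaplan - yorke relation used in the next paragraph ) , the following python fragment is a minimal sketch . the window width , embedding dimension , delay , radii and example exponents are assumptions chosen for demonstration ; they are not the values used in the actual analysis , nor is the fragment the noise - tolerant algorithm referred to above .

import numpy as np

def detrend(signal, window=501):
    # subtract a moving-average background from a 1-d signal (window averaging)
    kernel = np.ones(window) / window
    return signal - np.convolve(signal, kernel, mode="same")

def delay_embed(signal, dim, delay):
    # build delay vectors (s_i, s_{i+delay}, ..., s_{i+(dim-1)*delay})
    n = len(signal) - (dim - 1) * delay
    return np.column_stack([signal[i * delay: i * delay + n] for i in range(dim)])

def correlation_integral(vectors, radii):
    # grassberger-procaccia c(r): fraction of vector pairs closer than r
    dists = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=-1)
    pair_dists = dists[np.triu_indices(len(vectors), k=1)]
    return np.array([np.mean(pair_dists < r) for r in radii])

def kaplan_yorke_dimension(exponents):
    # d_ky = k + (sum of first k exponents) / |lambda_{k+1}|, with k the largest
    # index giving a non-negative partial sum (standard kaplan-yorke relation)
    lam = np.sort(np.asarray(exponents, dtype=float))[::-1]
    csum = np.cumsum(lam)
    if csum[0] < 0:
        return 0.0
    k = int(np.max(np.where(csum >= 0)[0]))
    if k + 1 >= len(lam):
        return float(len(lam))
    return (k + 1) + csum[k] / abs(lam[k + 1])

# illustrative usage on a synthetic signal (not the experimental ae data)
rng = np.random.default_rng(0)
s = np.sin(np.linspace(0, 60, 4000)) + 0.01 * np.cumsum(rng.standard_normal(4000))
vectors = delay_embed(detrend(s), dim=5, delay=4)[::10]          # subsample pairs for speed
radii = np.logspace(-2, 1, 20)
c = correlation_integral(vectors, radii)
mask = c > 0
nu = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)[0]      # slope ~ correlation dimension
print(round(nu, 2), round(kaplan_yorke_dimension([0.3, 0.0, -0.8, -1.5]), 2))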
using the spectrum ,we have calculated the kaplan - yorke dimension ( also called lyapunov dimension ) using the relation .the value so obtained in each case should be consistent with that obtained from the correlation integral .for the case shown in fig .[ dyinv](d ) we get consistent with .( typical error bars on the first three lyapunov exponents are and . thus the errors in values are . ) as an example of converged value of correlation dimension near the upper end of the chaotic domain , a log - log plot of for cm / s is shown in fig .[ dyinv](e ) with for to 10 .again , the scaling regime is seen to be nearly three orders of magnitude .the lyapunov spectrum for the data file is shown in fig .[ dyinv](f ) .the calculated lyapunov dimension from the spectrum is which is again consistent with .the values of for all the files are found to be in the range to as can be seen from table [ exptstat ] .we have calculated the lyapunov spectrum for the full range of traction velocities and we find ( stable ) positive and zero exponents only in the region 3.8 to 6.2 cm / s , consistent with the range of converged values of as can be seen from table [ exptstat ] .the corresponding values of are in the range of to .we have also calculated the lyapunov spectrum using the tisean package using cured files .the values obtained from the tisean package are uniformly closer to the values , typically .finally , we note that the positive exponent decreases toward the end of the chaotic domain ( 6.2 cm / s ) . these results ( see table [ exptstat ] ) show unambiguously that the underlying dynamics responsible for ae during peeling is chaotic in a mid range of pull speeds . in order to compare the low dimensional chaotic nature of the experimental ae signals with the model acoustic signal, we have analyzed the low dimensional dynamics of using the embedding procedure after subtracting the periodic component .we have computed the correlation dimension and lyapunov spectrum for the entire instability domain . a log - log plot of the is shown in fig .[ dyinvmodel](a ) for to 8 .the convergence over more than three orders of magnitude is clear . the value of . for this file ,we find stable positive and zero exponents for a range of values . a plot of the spectrum for and ( ) is shown in fig .[ dyinvmodel](b ) . using thiswe get which is again consistent with .we have calculated both correlation dimension and lyapunov spectrum of for a range of values of the parameters . for each , we find converged values of and within a window of pull speeds .generally , the range of is between 2.15 to 2.70 while is in the range 2.4 to 2.90 .table [ rae2 ] shows the values of correlation dimension and for various sets of parameter values .it is interesting to note that the magnitude of the largest exponent for the model ae signal also decreases as we increase the pull velocity , a feature displayed by the experimental time series as well .in summary , the present investigation is an attempt to understand the origin of the intermittent peeling of an adhesive tape and its connection to acoustic emission ._ at the conceptual level , we have established a relationship between stick - slip dynamics and the acoustic energy _ , the latter depends on the local strain rate which in turn is controlled by the roughness of the peel front . 
as the model is fully dynamical , one basic result that emergesis that the model acoustic energy is controlled by the nature of spatiotemporal dynamics of the peel front .further , even as the model acoustic emission is a dynamical quantity , the nature of turns out to be quite noisy depending on the possible interplay of different time scales in the model .thus , the highly noisy nature of the experimental signals need not necessarily imply stochastic origin of ae signal ; instead , they could be of deterministic origin .this motivated us to carry out a detailed analysis of statistical and dynamical features of the experimental ae signals .despite the high noise content , we have been able to demonstrate the existence of finite correlation dimension and positive lyapunov exponent for a window of pull speeds . the kaplan - yorke dimension ( for various traction velocities ) calculated from the lyapunov spectrumis consistent with the value obtained from the correlation integral .thus , the analysis establishes unambiguously the deterministic chaotic nature of the experimental ae signals .interestingly , the largest lyapunov exponent shows a decreasing trend toward the end of the chaotic window , a feature displayed by the model acoustic signal as well .the work also addresses the general problem of extracting dynamical information from noisy ae signals .a similar analysis of the model acoustic energy shows that is chaotic for a range of parameter values ._ more importantly , several qualitative features of the experimental ae signals such as the statistics of the signals and the change from burst to continuous type with increase in the pull velocity are also displayed by . _ the observed two stage power law distribution for the experimental ae signals [ fig .[ expt_dist](b ) ] is reproduced by the model [ fig .[ expt_dist](d ) ] .it must be emphasized that this power law distribution for the amplitudes is completely of dynamical origin .this result should be of general interest in the context of dynamical systems as there are very few models that generate power laws purely from dynamics .the only other example known to the authors is that of the plc effect where the amplitude of the stress drops shows a power law distribution within the context of the ananthakrishna model .the spatiotemporal patterns of the peel front are indeed rich and depend on the interplay of the three time scales .although , the nature of spatiotemporal patterns is quite varied , they can be classified as smooth synchronous , rugged , stuck - peeled and even nowhere stuck patterns . as expected on general consideration of dynamics ,rich patterns are observed for the case when all the three time scales are of similar magnitude ( illustrated for ) .all spatiotemporal patterns , except the smooth synchronous peel front are interesting . as a function of time, the nature of the peel front can go through a specific sequence of these patterns ( depending on the parameter values ) .the most interesting pattern is the stuck - peeled configuration which is reminiscent of fibrils observed in experiments . even among the sp configurations , there are variations , for example , rapidly changing , long lived , edge of peeling etc . 
despite the varied range of patterns , a few general trends of the influence of the parameters on the peel front patterns are worth noting . first , in general the number of stuck and peeled segments increases as is decreased . second , as the pull velocity is increased , the rapidly varying stuck - peeled configurations observed at low pull velocities become long lived . the dynamical signatures of these two parameters are reflected in the nature of the phase space orbit . for instance , given a value of and , the nature of the phase space orbit changes only when is increased , which allows the orbit to move way beyond the values of . the study of the model shows that while the intermittent peeling is controlled by the peel force function , the dynamics of the peel front is influenced by all the three time scales . this , together with the dynamical analysis of the experimental acoustic emission signals , establishes that deterministic dynamics is responsible for ae during peeling . the various sequences of peel front patterns and their time dependences lead to quite varied model acoustic signals . these can be classified as a bunch of spikes , isolated bursts occurring at near regular intervals , continuous bursts with an overall envelope separated by a quiescent state , continuous bursts overriding a near periodic triangular form , an irregular waveform overriding a periodic component , and a continuous irregular type . _ interestingly , our studies show that there is a definite correspondence between the model acoustic energy and the nature of the peel front patterns even though is the spatial average of the local strain rate . _ despite this , two distinguishable time scales in can be detected , one corresponding to short term fluctuations and another corresponding to an overall periodic component . the short term fluctuations can be readily identified when the model acoustic signal is fluctuating without any background component ( see for example fig . [ i2m1v1gup0_1](d ) ) . these rapid changes in arise due to fast dynamic changes in the sp configurations . the minimum in corresponds to the situation where the average velocity jumps of the sp configurations are smaller compared to those at the preceding maximum . in contrast , the overall periodicity in ( for instance see fig . [ i2m1v2gup1](c ) among many other cases ) can be identified with the changes in the peel front patterns that occur over a cycle in the phase plot . the minima in correspond to the peel patterns where more segments are in the stuck state than in the peeled state , while the maxima correspond to more stuck and peeled segments ( see figs . [ i2m1v2gup1](a ) , ( b ) ) . the corresponding phase plot usually goes through a cycle of visits between the low and high velocity branches . often , however , the nature of the model acoustic energy signal can be complicated , as in the case of low tape mass ( ) . even in such cases , some insight is possible . this is aided by the analysis of the corresponding phase plot . for example , for the low case where the roller inertia plays an important role in the dynamics , the rapidly fluctuating acoustic energy has an overall triangular envelope [ fig . [ i2m3v1gup01](e ) ] .
on an expanded scale ,the convex envelope consists of seven local peaks [ fig .[ i2m3v1gup01](f ) ] .each of these is generated when the various peel front segments make abrupt jumps between the two branches of the peel force function .note that the phase space orbit has seven loops in this case [ fig .[ i2m3v1gup01](d ) ] .the general identification of the minima in with patterns that have more stuck segments than peeled segments still holds .similarly , the maxima in usually correspond to the presence of large number of stuck - peeled configurations .the above correspondence between the model acoustic energy and the peel front patterns provides insight into the transition from burst to continuous type of ae seen in experiments as a similar transition from burst type to continuous type is also seen in the model acoustic energy ( for large ) . at low pull velocities , the peel front goes through a cycle of patterns where most segments of the peel front ( or the entire peel front ) spends substantial time in the stuck state switching to stuck - peeled configuration . as the duration of the sp configuration is short and velocity bursts are large, is of burst type [ fig . [exptae](c ) ] . with increasing pull velocity ,only dynamic stuck - peeled configurations are seen which in turn leads to continuous ae signals [ figures .[ i5m3v2gu0_1_01](a , b ) ] .this coupled with time series analysis of the model acoustic signal shows that the associated positive lyapunov exponent decreases with increase in the traction velocity .this is precisely the trend observed for experimental signals as well .thus , the decreasing trend of the largest lyapunov exponent can be attributed to the peel front breaking up into large number of small segments providing insight into stick - slip dynamics and its connection to the ae process .the present study has relevance to the general area of stick - slip dynamics .as mentioned earlier , models for stick - slip dynamics use negative force - drive rate relation . in such models, the phase space orbit generally sticks to the slow manifold ( stable branches ) of the force - drive rate function .this leads to clearly identifiable stick and slip phases , the former lasting much longer than the latter .however , recent work on imaging the peel point dynamics shows that the ratio of the stick phase to the slip phase , is about two or even less than unity for high peel velocities . while all the known models of the peel process predict that the duration of the stick phase is longer than that of the slip phase , our model displays the experimentally observed feature .this feature emerges in the model due to the interplay of the three time scales aided by incomplete relaxation of the relevant modes .our studies show that _ only _ for low pull velocity and high do we observe the stick phase lasting much longer than the slip phase ._ as the pull velocity is increased , and for all other parameter values , we find that the duration of the slip phase _( peel velocity being larger than unity ) _ is nearly the same as or less than that of the stick phase _( peel velocity less than unity ) .further , the present model provides an example of the richness of spatiotemporal dynamics arising when more than two time scales are involved . 
in this context , we emphasize that the introduction of the rayleigh dissipation functional to model the acoustic energy is crucial for the richness of the spatiotemporal peel front patterns .it is important to note that this kind of dissipative term is specific to spatially extended systems as it represents relaxation of neighboring points on the peel front .the present study has relevance to time dependent issues of adhesion .for instance , apart from the fact that the time series analysis addresses the general problem of extracting dynamical information from noisy ae signals , it may have relevance to failure of adhesive joints and composites that are subject to fluctuating loads .the failure time can be estimated by calculating the lyapunov spectrum for the ae signals .if the largest lyapunov exponent is positive , the inverse of the exponent should give an estimate of the time scale over which the failure can occur and hence could prove to be useful in predicting failure of joints .the present study should also help to optimize production schedules in peeling tapes .finally , several features of the present study are common to the plc effect even though the underlying mechanism is very different . in this casethe repeated occurrence of stress drops during constant strain rate deformation , are associated with the formation and possible propagation of dislocation bands that are visible to the naked eye .the phenomenon occurs only in a window of applied strain rates .the instability is attributed to the pinning and unpinning of dislocations from solute atmosphere , yet , the dominant feature underlying the instability is the negative strain rate sensitivity of the flow stress that has two stable branches separated by an unstable branch . clearly , these features are similar to the occurrence of the peel instability within a window of pull velocities and the existence of unstable branch in the peel force function .further , the ananthakrishna ( ak ) model for the plc instability predicts that the stress drops should be chaotic in a subinterval of the instability domain .this prediction has been verified subsequently through the analysis of experimental stress - strain curves obtained from single and polycrystals .this feature is again similar to the existence of chaotic dynamics observed in a mid range of pull velocities in the peeling problem , both in experiment and in the model . in the case of the plc effect, one finds that the positive lyapunov exponent characterizing the stress - time series decreases toward the end of chaotic window , both in experiments and in the ak model . againthis feature is also seen in the present peel model as also in experimental ae signals .dynamically , in the case of the ak model for the plc effect the decreasing trend of the positive lyapunov exponent has been shown to be a result of a forward hopf bifurcation ( hb ) followed by a reverse hb . in the case of peeling problem as well ,the instability begins with a forward hb followed by a reverse hb . 
finally , in the plc effect ( both in experiments and the ak model ) , as in the peeling problem , the duration of the slip phase can be longer than that of the stick phase with increasing drive rate . as many of these features are common to the two different systems , it is likely that these are general features of other stick - slip situations with multiple participating time scales that are limited to a window of drive rates . a few comments may be in order about the model , in particular about the parameters that are crucial for the dynamics . while the agreement of several statistical and dynamical features of the model ( for several sets of parameter values ) with the experimental ae series is encouraging , it would be interesting to verify the model results for other sets of parameters . for instance , it is clear that the roller inertia and the inertia of the tape mass are experimentally accessible parameters . thus , the influence of these two inertial time scales can in principle be studied in experiments . however , conventional experiments have been performed keeping these parameters fixed , presumably because there has been no suggestion that the dynamics can be sensitive to these variables . it would be interesting to verify the predicted dynamical changes in the ae signals as a function of these two parameters . as for the influence of the dissipation parameter , the range of physically reasonable values of is expected to be small ( to ) as argued . interestingly , the region of low is indeed the region where both the statistical and dynamical features compare well with those of the experiments . within the scope of the model , the visco - elastic properties of the adhesive have been modeled using an effective spring constant . ( this kind of assumption is common to studies in adhesion , tackiness , etc . ) however , it is possible to include this feature as well . finally , it must be stated that features that critically depend on the thickness of the film and its visco - elastic properties , such as the shape of the peel front , are beyond the scope of the present model .
we report a comprehensive investigation of a model for peeling of an adhesive tape along with a nonlinear time series analysis of experimental acoustic emission signals in an effort to understand the origin of intermittent peeling of an adhesive tape and its connection to acoustic emission . the model represents the acoustic energy dissipated in terms of rayleigh dissipation functional that depends on the local strain rate . we show that the nature of the peel front exhibits rich spatiotemporal patterns ranging from smooth , rugged and stuck - peeled configurations that depend on three parameters , namely , the ratio of inertial time scale of the tape mass to that of the roller , the dissipation coefficient and the pull velocity . the stuck - peeled configurations are reminiscent of fibrillar peel front patterns observed in experiments . we show that while the intermittent peeling is controlled by the peel force function , the model acoustic energy dissipated depends on the nature of the peel front and its dynamical evolution . even though the acoustic energy is a fully dynamical quantity , it can be quite noisy for a certain set of parameter values suggesting the deterministic origin of acoustic emission in experiments . to verify this suggestion , we have carried out a dynamical analysis of experimental acoustic emission time series for a wide range of traction velocities . our analysis shows an unambiguous presence of chaotic dynamics within a subinterval of pull speeds within the intermittent regime . time series analysis of the model acoustic energy signals is also found to be chaotic within a subinterval of pull speeds . further , the model provides insight into several statistical and dynamical features of the experimental ae signals including the transition from burst type acoustic emission to continuous type with increasing pull velocity and the connection between acoustic emission and stick - slip dynamics . finally , the model also offers an explanation for the recently observed feature that the duration of the slip phase can be less than that of the stick phase .
along with the boom in cloud computing , an increasing number of commercial providers have started to offer public cloud services . sincedifferent commercial cloud services may be supplied with different terminologies , definitions , and goals , performance evaluation of those services would be crucial and beneficial for both service customers ( e.g. cost - benefit analysis ) and providers ( e.g. direction of improvement ) . before implementing performance evaluation , a proper set of experiments must be designed , while the relevant factors that may influence performance play a prerequisite role in designing evaluation experiments . in general , one experiment should take into account more than one factor related to both the service to be evaluated and the workload .after exploring the existing studies of cloud services performance evaluation , however , we found that there was a lack of systematic approaches to factor selection for experimental design . in most cases , evaluators identified factors either randomly or intuitively , and thus prepared evaluation experiments through an ad hoc way .for example , when it comes to the performance evaluation of amazon ec2 , different studies casually considered different ec2 instance factors in different experiments , such as vm type , number , geographical location , operation system ( os ) brand , and even cpu architecture and brand , etc . in fact , to the best of our knowledge , none of the current cloud performance evaluation studies has used experimental factors " deliberately to design evaluation experiments and analyze the experimental results .therefore , we decided to establish a framework of suitable experimental factors to facilitate applying experimental design techniques to the cloud services evaluation work . unfortunately , it is difficult to directly point out a full scope of experimental factors for evaluating performance of cloud services , because the cloud nowadays is still chaotic compared with traditional computing systems .consequently , we used a regression manner to construct this factor framework .in other words , we tried to isolate the de facto experimental factors from the state - of - the - practice of cloud services performance evaluation .in fact , the establishment of this factor framework is a continuation of our previous work that collected , clarified and rationalized the key concepts and their relationships in the existing cloud performance evaluation studies .benefitting from such a de facto factor framework , new evaluators can explore and refer to the existing evaluation concerns for designing their own experiments for performance evaluation of commercial cloud services .note that , as a continuation of our previous work , this study conventionally employed four constrains , as listed below .we focused on the evaluation of only commercial cloud services , rather than that of private or academic cloud services , to make our effort closer to industry s needs .we only investigated performance evaluation of commercial cloud services .the main reason is that not enough data of evaluating the other service features could be found to support the generalization work .for example , there are little empirical studies in security evaluation of commercial cloud services due to the lack of quantitative metrics .we considered infrastructure as a service ( iaas ) and platform as a service ( paas ) without considering software as a service ( saas ) . 
since saas with special functionalitiesis not used to further build individual business applications , the evaluation of various saas instances could comprise an infinite and exclusive set of factors that would be out of the scope of this investigation .we only explored empirical evaluation practices in academic publications .there is no doubt that informal descriptions of cloud services evaluation in blogs and technical websites can also provide highly relevant information .however , on the one hand , it is impossible to explore and collect useful data from different study sources all at once . on the other hand, the published evaluation reports can be viewed as typical and peer - reviewed representatives of the existing ad hoc evaluation practices .the remainder of this paper is organized as follows .section [ ii ] briefly introduces the four - step methodology that we have used to establish this factor framework .section [ iii ] specifies the tree - structured factor framework branch by branch .an application case is employed in section [ iv ] to demonstrate how the proposed factor framework can help facilitate experimental design for cloud services performance evaluation .conclusions and some future work are discussed in section [ v ] .as previously mentioned , this factor framework is established based on our previous work , which is mainly composed of four steps , as listed below and respectively specified in the following subsections : conduct a systematic literature review ( slr ) .construct a taxonomy based on the slr .build a conceptual model based on the taxonomy .establish an experimental factor framework at last .the foundation for establishing this factor framework is a systematic literature review ( slr ) on evaluating commercial cloud services . as the main methodology applied for evidence - based software engineering ( ebse ) ,slr has been widely accepted as a standard and systematic approach to investigation of specific research questions by identifying , assessing , and analyzing published primary studies . following a rigorous selection process in this slr ,as illustrated in figure [ fig > picsequencediagram ] , we have identified 46 cloud services evaluation studies covering six commercial cloud providers , such as amazon , gogrid , google , ibm , microsoft , and rackspace , from a set of popular digital publication databases ( all the identified evaluation studies have been listed online for reference : http://www.mendeley.com/groups/1104801/slr4cloud/papers/ ) .the evaluation experiments in those identified 46 studies were thoroughly analyzed .in particular , the atomic experimental components , such as evaluation requirements , cloud service features , metrics , benchmarks , experimental resources , and experimental operations , were respectively extracted and arranged . during the analysis of these identified evaluation studies, we found that there were frequent reporting issues ranging from non - standardized specifications to misleading explanations . considering that those issues would inevitably obstruct comprehending and spoil drawing lessons from the existing evaluation work, we created a novel taxonomy to clarify and arrange the key concepts and terminology for cloud services performance evaluation .the taxonomy is constructed along two dimensions : performance feature and experiment . 
moreover , the performance feature dimension is further split into _ physical property _ and _ capacity _ parts , while the experiment dimension is split into _ environmental scene _ and _ operational scene _ parts , as shown in figure [ fig > pictaxonomy ] . the details of this taxonomy have been elaborated in . since a model is an abstract summary of some concrete object or activity in reality , the identification of real and concrete objects / activities plays a fundamental role in the corresponding modeling work . given that the taxonomy had encapsulated the relevant key concepts and terminology , we further built a conceptual model of performance evaluation of commercial cloud services to rationalize different abstract - level classifiers and their relationships . in detail , we used a three - layer structure to host different abstract elements of the performance evaluation conceptual model . to save space , here we only portray the most generalized part hosted in the top classifier layer , as shown in figure [ fig > picevaluationmodel ] , which reflects the most generic reality of performance evaluation of a computing paradigm : essentially , performance evaluation can be considered as _ exploring the capacity of particular computing resources with particular workloads driven by a set of operations_. in fact , the specific classifiers in the abovementioned conceptual model have implied the state - of - the - practice of performance evaluation factors that people currently take into account in the cloud computing domain . according to their different positions in the process of an evaluation experiment , the specific classifiers of workload and computing resource indicate input process factors ; the specific classifiers of capacity suggest output process factors ; while the operation classifiers are used to adjust the values of input process factors . the detailed experimental factors for cloud services performance evaluation are elaborated in the next section . as mentioned previously , the experimental factors for performance evaluation of commercial cloud services can be categorized into two input process groups ( workload and computing resource ) and one output process group ( capacity ) . thus , we naturally portrayed the factor framework as a tree with three branches . each of the following subsections describes one branch of the factor tree . based on our previous work , we found that a piece of workload used in performance evaluation could be described through one of three different concerns or a combination of them , namely terminal , activity , and object . as such , we can adjust the workload by varying any of the concerns through different experimental operations . the individual workload factors are listed in figure [ fig > picworkloadtree ] . in contrast with the services to be evaluated in the cloud , the clients and particular cloud resources ( usually vm instances ) that issue workload activities can be viewed as terminals . correspondingly , the _ geographical location _ or _ number _ of both clients and vm instances has been used to depict the relevant workload . meanwhile , the _ terminal type _ can also be used as a workload factor . for example , the authors evaluated cloud network latency by using a client and an ec2 instance respectively to issue pings . in this case , the _ terminal type _ is essentially equivalent to the factor _ communication scope _ ( cf . subsection [ iii > commun ] ) .
the concept activity " here describes an inherent property of workload , which is different from , but adjustable by , experimental operations .for example , disk i / o request as a type of activity can be adjusted by operations like the number or time of the requests .in fact , the number- and time - related variables , such as _ activity duration _ , _ frequency _ , _ number _ , and _ timing _ , have been widely considered as workload factors in practice .furthermore , by taking a particular cloud resource being evaluated as a reference , the factor _ activity direction _ can be depicted as input or output . as for the _ activity sequence_ in a workload , the arrangement generates either sequential or parallel activity flows . in a workload for cloud services performance evaluation , objects refer to the targets of the abovementioned activities .the concrete objects can be individual messages , data files , and transactional jobs / tasks in fine grain , while they can also be coarse - grained workflows or problems .therefore , the _ object number _ and _ object size / complexity _ are two typical workload factors in the existing evaluation studies .note that we do not consider object location as a workload factor , because the locations of objects are usually hosted and determined by computing resources ( cf .subsection [ iii > resources ] ) .in particular , a workload may have multiple object size / complexity - related factors in one experiment .for example , a set of parameters of hpl benchmark , such as the block size and process grid size , should be tuned simultaneously when evaluating amazon ec2 . according to the physical properties in the performance feature of commercial cloud services , the cloud computing resource can be consumed by one or more of four basic styles : communication , computation , memory ( cache ) , and storage .in particular , the vm instance resource is an integration of all the four basic types of computing resources .overall , the computing resource factors can be organized as figure [ fig > picresourcetree ] shows . as explained in , communication becomes a special cloud computing resource because commercial cloud services are employed inevitably through internet / ethernet .as such , the _ethernet i / o index _ is usually pre - supplied as a service - level agreement ( sla ) by service providers . in practice ,the scope and level of communication have been frequently emphasized in the performance evaluation studies .therefore , we can summarize two practical factors : the factor _ communication scope _ considers intra - cloud and wide - area data transferring respectively , while the _ communication level _ distinguishes between ip - level and mpi - message - level networking . when evaluating paas , the computation resource is usually regarded as a black box . whereas , for iaas , the practices of computation evaluation of cloud services have taken into account _core number _ , _ elastic compute unit ( ecu ) number _ , _ thread number _ , and a set of cpu characteristics .note that , compared to physical cpu core and thread , ecu is a logical concept introduced by amazon , which is defined as the cpu power of a 1.0 - 1.2 ghz 2007 opteron or xeon processor .when it comes to cpu characteristics , the _ architecture _( e.g. 32 bit vs. 64 bit ) and _ brand _ ( e.g. amd opteron vs. intel xeon ) have been respectively considered in evaluation experiments .processors with the same brand can be further distinguished between different _ cpu models _ ( e.g. 
intel xeon e5430 vs. intel xeon x5550 ) . in particular , _ cpu frequency _ also appears as an sla of cloud computation resources . since the memory / cache works closely with the computation and storage resources in computing jobs , it is hard to exactly distinguish the effect on performance brought by the memory / cache . therefore , not many dedicated cloud memory / cache evaluation studies can be found in the literature . in addition to the sla _ memory size _ , interestingly , the _ physical location _ and _ size _ of the cache ( e.g. l1=64 kb vs. l2=1 mb in amazon m1 .* instances ) have attracted attention when analyzing the memory hierarchy . however , in , different values of these factors were actually revealed by performance evaluation rather than used for experimental design . as mentioned in , storage can be either the only functionality or a component functionality of a cloud service , for example amazon s3 vs. ec2 . therefore , it can often be seen that disk - related storage evaluation also adopted the experimental factors of evaluating other relevant resources like vm instances ( cf . subsection [ iii - vm ] ) . similarly , the predefined _ storage size _ acts as an sla , while a dedicated factor of evaluating storage is the _ geographical location_. different geographical locations of storage resources can result either from different service data centers ( e.g. s3 vs. s3-europe ) or from different storing mechanisms ( e.g. local disk vs. remote nfs drive ) . in addition , although not all of the public cloud providers specified the definitions , the storage resource has been distinguished among three types of offerings : blob , table and queue . note that different _ storage types _ correspond to different sets of data - access activities , as described in . the vm instance is one of the most popular computing resource styles in the commercial cloud service market . the widely considered factors in current vm instance evaluation experiments are _ geographical location _ , _ instance number _ , and _ vm type _ . the _ vm type _ of a particular instance naturally reflects its corresponding provider , as demonstrated in . moreover , although not common , the _ os brand _ ( e.g. linux vs.
windows ) and _ physical location _ also emerged as experimental factors in some evaluation studies .note that the physical location of a vm instance indicates the instance s un - virtualized environment , which is not controllable by evaluators in evaluation experiments .in particular , recall that a vm instance integrates above four basic types of computing resources .we can therefore find that some factors of evaluating previous resources were also used in the evaluation of vm instances , for example the _ cpu architecture _ and _ core number _ .as discussed about the generic reality of performance evaluation in subsection [ ii > model ] , it is clear that the capacities of a cloud computing resource are intangible until they are measured .meanwhile , the measurement has to be realized by using measurable and quantitative metrics .therefore , we can treat the values of relevant metrics as tangible representations of the evaluated capacities .moreover , a particular capacity of a commercial cloud service may be reflected by a set of relevant metrics , and each metric provides a different lens into the capacity as a whole .for example , benchmark transactional job delay and benchmark delay are both latency metrics : the former is from the individual perspective , while the latter from the global perspective .as such , we further regard relevant metrics as possible output process factors when measuring a particular cloud service capacity , and every single output process factor can be used as a candidate response in the experimental design .since we have clarified seven different cloud service capacities , i.e. data throughput , latency , transaction speed , availability , reliability , scalability , and variability , the possible capacity factors ( metrics ) can be correspondingly categorized as figure [ fig > piccapacitytree ] shows . due to the limit of space , it is impossible and unnecessary to exhaustively list all the metrics in this paper .in fact , the de facto metrics for performance evaluation of commercial cloud services have been collected and summarized in our previous work .since the factor framework is inherited from the aforementioned taxonomy and modeling work , it can also be used for , and in turn be validated by , analyzing the existing studies of cloud services performance evaluation , as described in . to avoid duplication, we do not elaborate the analysis application scenario , and the corresponding validation , of the factor framework in this paper .instead , we particularly highlight and demonstrate how this factor framework can help facilitate designing experiments for evaluating performance of commercial cloud services .suppose there is a requirement of evaluating amazon ec2 with respect to its disk i / o .recall that relevant factors play a prerequisite role in designing evaluation experiments .given the factor framework proposed in this paper , we can quickly and conveniently lookup and choose experimental factors according to the evaluation requirement . 
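to make the dictionary - like lookup concrete , the following python fragment is a minimal sketch of the three - branch factor tree as a nested dictionary . the branch names and ( abbreviated ) leaf factors follow the discussion above , while the exact nesting and the helper function are illustrative assumptions rather than a normative schema .

# a minimal sketch of the tree-structured factor framework as a nested dictionary,
# so that evaluators can "look up" candidate factors for a given requirement
factor_framework = {
    "workload": {
        "terminal": ["geographical location", "number", "terminal type"],
        "activity": ["duration", "frequency", "number", "timing", "direction", "sequence"],
        "object": ["object number", "object size/complexity"],
    },
    "computing resource": {
        "communication": ["ethernet i/o index (sla)", "communication scope", "communication level"],
        "computation": ["core number", "ecu number", "thread number",
                        "cpu architecture", "cpu brand", "cpu model", "cpu frequency (sla)"],
        "memory/cache": ["memory size (sla)", "cache physical location", "cache size"],
        "storage": ["storage size (sla)", "geographical location", "storage type"],
        "vm instance": ["geographical location", "instance number", "vm type",
                        "os brand", "physical location"],
    },
    "capacity (responses)": ["data throughput", "latency", "transaction speed",
                             "availability", "reliability", "scalability", "variability"],
}

def lookup(branch, sub_branch=None):
    # return the candidate factors under a branch (and an optional sub-branch)
    node = factor_framework[branch]
    return node[sub_branch] if sub_branch else node

# e.g. candidate computing-resource factors for a disk i/o evaluation of a vm-based service
print(lookup("computing resource", "vm instance"))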
to simplify the demonstration , here we constrain the terminal to be clients , consider only the direction of disk i / o and the data size to be read / written among the workload factors , and consider only the ec2 vm type among the computing resource factors . as for the capacity factors , we can employ multiple suitable metrics in this evaluation , for example disk i / o latency and data throughput . however , since only one metric should be determined as the response in an experimental design , we choose the disk data throughput in this case . thus , we have circled _ activity direction _ , _ object size _ , and _ vm type _ as factors , with _ data throughput _ as the response in the framework for designing experiments . in particular , we use two - level settings for the three factors : the value of _ activity direction _ can be write or read ; _ object size _ can be char or block ; and _ vm type _ only covers m1.small and m1.large . in addition , we use mb / s as the unit of _ data throughput_. since only a small number of factors is concerned , we can simply adopt the most straightforward design technique , namely the full - factorial design , for this demonstration . this design technique enumerates every combination of the factor levels , which results in an experimental matrix comprising eight trials , as shown in matrix ( [ eq>1 ] ) . for conciseness , we further assign aliases to those experimental factors , as listed below . note that the sequence of the experimental trials has been randomized to reduce possible noise or bias during the design process . a : activity direction ( write vs. read ) . b : object size ( char vs. block ) . c : vm type ( m1.small vs. m1.large ) . response : data throughput ( mb / s ) . following the experimental matrix , we can implement the evaluation experiments trial by trial , and fill the response column with the experimental results . for our convenience , here we directly employ the evaluation results reported in , as listed in matrix ( [ eq>2 ] ) . finally , different analytical techniques can be employed to reveal more comprehensive meanings of the experimental results for commercial cloud services . for example , in this case , we can further investigate the significances of these factors to analyze their different influences on the disk i / o performance . in detail , by setting the significance level as 0.05 , we draw a pareto plot to detect the factor and interaction effects that are important to the process of reading / writing data from / to ec2 disks , as shown in figure [ fig > picplot ] . given a particular significance level , the pareto plot displays a red reference line beside the effect values . any effect that extends past the reference line is potentially important . in figure [ fig > picplot ] , none of the factor or interaction effects is beyond the reference line , which implies that none of the factors or interactions significantly influences the ec2 disk i / o performance . therefore , we can claim that ec2 disk i / o is statistically stable with respect to those three factors . however , factor b ( the data size to be read / written ) has a relatively significant influence on the performance of ec2 disk i / o . since the throughput of small - size data ( char ) is much lower than that of large - size data ( block ) , we can conclude that there is a bottleneck of transaction overhead when reading / writing small - size data . a minimal illustrative sketch of this design and the corresponding effect estimation is given below .
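as a minimal sketch of the two - level full - factorial design and effect estimation just described , the following python fragment enumerates the eight trials in randomized order , codes the factor levels , and computes the main and interaction effects . the response values used here are illustrative placeholders only ( the measured throughputs belong in matrix ( [ eq>2 ] ) ) , so the printed effects merely demonstrate the calculation and are not the reported results .

import itertools, random
import numpy as np

levels = {
    "a (activity direction)": ["write", "read"],
    "b (object size)":        ["char", "block"],
    "c (vm type)":            ["m1.small", "m1.large"],
}

# full factorial: all 2*2*2 = 8 level combinations, run in randomized order
trials = list(itertools.product(*levels.values()))
random.seed(1)
random.shuffle(trials)

# coded levels: -1 for the first setting of each factor, +1 for the second
coded = np.array([[2 * lv.index(setting) - 1
                   for lv, setting in zip(levels.values(), trial)]
                  for trial in trials])

# placeholder responses (mb/s): an illustrative rule where block-size objects get
# much higher throughput than char-size objects, plus a little noise -- these are
# stand-ins for the measured values of matrix (2), not real data
rng = np.random.default_rng(1)
response = np.array([(60.0 if obj == "block" else 25.0) + rng.normal(0.0, 1.5)
                     for (_direction, obj, _vm) in trials])

# effect of each contrast in a 2^3 design: mean(response at +1) - mean(response at -1)
labels = ["a", "b", "c", "ab", "ac", "bc", "abc"]
columns = [coded[:, 0], coded[:, 1], coded[:, 2],
           coded[:, 0] * coded[:, 1], coded[:, 0] * coded[:, 2],
           coded[:, 1] * coded[:, 2], coded[:, 0] * coded[:, 1] * coded[:, 2]]
effects = {lab: response[col > 0].mean() - response[col < 0].mean()
           for lab, col in zip(labels, columns)}
for lab, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{lab:>3}: {eff:+6.2f} mb/s")   # a pareto plot ranks |effects| in this order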
on the contrary, there is little i / o performance effect when switching activity directions , which means the disk i / o of ec2 is particularly stable no matter reading or writing the same size of data .overall , through the demonstration , we can find that this factor framework offers a concrete and rational foundation for implementing performance evaluation of commercial cloud services .when evaluating cloud services , there is no doubt that the techniques of experimental design and analysis can still be applied by using intuitively selected factors . nevertheless , by referring to the existing evaluation experiences , evaluatorscan conveniently identify suitable experimental factors while excluding the others , which essentially suggests a systematic rather than ad hoc decision making process .cloud computing has attracted tremendous amount of attention from both customers and providers in the current computing industry , which leads to a competitive market of commercial cloud services . as a result, different cloud infrastructures and services may be offered with different terminology , definitions , and goals .on one hand , different cloud providers have their own idiosyncratic characteristics when developing services .on the other hand , even the same provider can supply different cloud services with comparable functionalities for different purposes .for example , amazon has provided several options of storage service , such as ec2 , ebs , and s3 .consequently , performance evaluation of candidate services would be crucial and beneficial for many purposes ranging from cost - benefit analysis to service improvement .when it comes to performance evaluation of a computing system , proper experiments should be designed with respect to a set of factors that may influence the system s performance . in the cloud computing domain ,however , we could not find any performance evaluation study intentionally concerning factors " for experimental design and analysis . on the contrary ,most of the evaluators intuitively employed experimental factors and prepared ad hoc experiments for evaluating performance of commercial cloud services . considering factor identification plays a prerequisite role in experimental design , it is worthwhile and necessary to investigate the territory of experimental factors to facilitate evaluating cloud services more systematically .therefore , based on our previous work , we collected experimental factors that people currently took into account in cloud services performance evaluation , and arrange them into a tree - structured framework .the most significant contribution of this work is that the framework supplies a dictionary - like approach to selecting experimental factors for cloud services performance evaluation .benefitting from the framework , evaluators can identify necessary factors in a concrete space instead of on the fly . 
in detail , as demonstrated in the ec2 disk i / o evaluation case in section [ iv ] , given a particular evaluation requirement , we can quickly and conveniently lookup and circle relevant factors in the proposed framework to design evaluation experiments , and further analyze the effects of the factors and their interactions to reveal more of the essential nature of the evaluated service .note that this factor framework is supposed to supplement , but not replace , the expert judgement for experimental factor identification , which would be particularly helpful for cloud services evaluation when there is a lack of a bunch of experts .the future work of this research will be unfolded along two directions .first , we will gradually collect feedback from external experts to supplement this factor framework .as explained previously , cloud computing is still maturing and relatively chaotic , it is therefore impossible to exhaustively identify the relevant experimental factors all at once . through smooth expansion , we can make this factor framework increasingly suit the more general area of evaluation of cloud computing .second , given the currently available factors , we plan to formally introduce and adapt suitable techniques of experimental design and analysis to evaluating commercial cloud services . with experimental design and analysis techniques ,this factor framework essentially acts as a solid base to support systematic implementations of cloud services evaluation .this project is supported by the commonwealth of australia under the australia - china science and research fund .nicta is funded by the australian government as represented by the department of broadband , communications and the digital economy and the australian research council through the ict centre of excellence program . c. baun and m. kunze , performance measurement of a private cloud in the opencirrus^tm^ testbed , " _ proc .4th workshop on virtualization in high - performance cloud computing ( vhpc 2009 ) in conjunction with 15th int .european conf .parallel and distributed computing ( euro - par 2009 ) _ , springer - verlag , aug .2009 , pp .434443 . c. binnig , d. kossmann , t. kraska , and s. loesing , how is the weather tomorrow ? towards a benchmark for the cloud , " _ proc .2nd int .workshop on testing database systems ( dbtest 2009 ) in conjunction with acm sigmod / podps int .management of data ( sigmod / pods 2009 ) _ , acm press , jun .2009 , pp .d. chiu and g. agrawal , evaluating caching and storage options on the amazon web services cloud , " _ proc .11th acm / ieee int .grid computing ( grid 2010 ) _ , ieee computer society , oct .2010 , pp .e. deelman , g. singh , m. livny , b. berriman , and j. good , the cost of doing science on the cloud : the montage example , " _ proc .2008 int .high performance computing , networking , storage and analysis ( sc 2008 ) _ , ieee computer society , nov .2008 , pp . 112 .j. dejun , g. pierre , and c .- h .chi , ec2 performance analysis for resource provisioning of service - oriented applications , " _ proc .2009 int .conf . service - oriented computing ( icsoc / servicewave 2009 ) _ , springer - verlag , nov .2009 , pp .197207 .q. he , s. zhou , b. kobler , d. duffy , and t. mcglynn , case study for running hpc applications in public clouds , " _ proc .19th acm int .symp . high performance distributed computing ( hpdc 2010 ) _ , acm press , jun .2010 , pp .395401 .z. hill and m. 
humphrey , a quantitative analysis of high performance computing with amazon s ec2 infrastructure : the death of the local cluster ? , " _ proc .10th ieee / acm int .conf . grid computing ( grid 2009 ) _ , ieee computer society , oct .2009 , pp .. z. hill , j. li , m. mao , a. ruiz - alvarez , and m. humphrey , early observations on the performance of windows azure , " _ proc .19th acm int .symp . high performance distributed computing ( hpdc 2010 ) _ , acm press , jun .2010 , pp .367376 .a. iosup , s. ostermann , n. yigitbasi , r. prodan , t. fahringer , and d.h.j .epema , performance analysis of cloud computing services for many - tasks scientific computing , " _ ieee trans .parallel distrib ._ , vol . 22 , no . 6 , jun . 2011 , pp. 931945 . r.k .jain , _ the art of computer systems performance analysis : techniques for experimental design , measurement , simulation , and modeling_. new york , ny : wiley computer publishing , john wiley & sons , inc ., may 1991 .g. juve , e. deelman , k. vahi , g. mehta , b. berriman , b.p .berman , and p. maechling , scientific workflow applications on amazon ec2 , " _ proc .workshop on cloud - based services and applications in conjunction with the 5th ieee int .e - science ( e - science 2009 ) _ , ieee computer society , dec . 2009 ,5966 .z. li , l. obrien , r. cai , and h. zhang , towards a taxonomy of performance evaluation of commercial cloud services , " _ proc .5th int .cloud computing ( ieee cloud 2012 ) _ , ieee computer society , jun . 1012 , pp .344351 .z. li , l. obrien , h. zhang , and r. cai , on a catalogue of metrics for evaluating commercial cloud services , " _ proc .13th acm / ieee int .grid computing ( grid 2012 ) _ , ieee computer society , sept .2012 , pp .164173 .a. luckow and s. jha , abstractions for loosely - coupled and ensemble - based simulations on azure , " _ proc .2nd ieee int .conf . cloud computing technology and science ( cloudcom 2010 )_ , ieee computer society , nov./dec .2010 , pp .550556 .j. napper and p. bientinesi , can cloud computing reach the top500 ? , " _ proc .combined workshops on unconventional high performance computing workshop plus memory access workshop ( uchpc - maw 2009 ) _ , acm press , may 2009 , pp . 1720 .s. ostermann , a. iosup , n. yigitbasi , r. prodan , t. fahringer , and d.h.j .epema , a performance analysis of ec2 cloud computing services for scientific computing , " _ proc .1st int .cloud computing ( cloudcomp 2009 ) _ , springer - verlag , oct .2009 , pp .115131 .palankar , a. iamnitchi , m. ripeanu , and s. garfinkel , amazon s3 for science grids : a viable solution ? , " _ proc .2008 int .workshop on data - aware distributed computing ( dadc 2008 ) _ , acm press , jun .2008 , pp .r. prodan and s. ostermann , a survey and taxonomy of infrastructure as a service and web hosting cloud providers , " _ proc .10th ieee / acm int .conf . grid computing ( grid 2009 ) _ , ieee computer society , oct .2009 , pp .w. sobel , s. subramanyam , a. sucharitakul , j. nguyen , h. wong , a. klepchukov , s. patil , a. fox , and d. patterson , cloudstone : multi - platform , multi - language benchmark and measurement tools for web 2.0 , " _ proc .1st workshop on cloud computing and its applications ( cca 2008 ) _ , oct .2008 , pp .v. stantchev , performance evaluation of cloud computing offerings , " _ proc .3rd int .advanced engineering computing and applications in sciences ( advcomp 2009 ) _ , ieee computer society , oct .2009 , pp .
given the diversity of commercial cloud services , performance evaluations of candidate services would be crucial and beneficial for both service customers ( e.g. cost - benefit analysis ) and providers ( e.g. direction of service improvement ) . before an evaluation implementation , the selection of suitable factors ( also called parameters or variables ) plays a prerequisite role in designing evaluation experiments . however , there seems to be a lack of systematic approaches to factor selection for cloud services performance evaluation . in other words , in most of the existing evaluation studies , evaluators selected experimental factors in an ad hoc and intuitive fashion . based on our previous taxonomy and modeling work , this paper proposes a factor framework for experimental design for performance evaluation of commercial cloud services . this framework encapsulates the state - of - the - practice of performance evaluation factors that people currently take into account in the cloud computing domain , and in turn can help facilitate designing new experiments for evaluating cloud services . cloud computing ; commercial cloud services ; performance evaluation ; experimental design ; factor framework
let , and be three finite - dimensional real euclidean spaces each endowed with an inner product and its induced norm .let ] be two closed proper convex functions and and be two linear maps .consider the following 2-block separable convex optimization problem : where is the given data and the linear maps and are the adjoints of and , respectively .the effective domains of and are denoted by and , respectively .let be a given penalty parameter .the augmented lagrangian function of problem is defined by , for any , choose an initial point and a step - length .the classical alternating direction method of multipliers ( admm ) of glowinski and marroco and gabay and mercier then takes the following scheme for , { \displaystyle}z^{k+1}=\operatorname*{arg\,min}_{z}{{\mathcal l}}_{\sigma}(y^{k+1},z ; x^k ) , \\[2 mm ] { \displaystyle}x^{k+1}=x^k+\tau\sigma({{\mathcal a}}^*y^{k+1}+{{\mathcal b}}^*z^{k+1}-c ) . \end{array } \right.\ ] ] the convergence analysis for the admm scheme under certain settings was first conducted by gabay and mercier , glowinski and fortin and glowinski .one may refer to and for recent surveys on this topic and to for a note on the historical development of the admm . in a highly influential paper times captured by google scholar as of july 8 , 2015 .] written by boyd et al . , it was asserted [ section 3.2.1 , page 17 ] that if and are closed proper convex functions ( * ? ? ?* assumption 1 ) and the lagrangian function of problem ( [ primal ] ) has a saddle point ( * ? ? ?* assumption 2 ) , then the admm scheme converges for . this , however , turns to be false without imposing the prior condition that all the subproblems involved have solutions .to demonstrate our claim , in this note we shall provide a simple example ( see section [ sec : example ] ) with the following four nice properties : * both and are closed proper convex functions ; * the lagrangian function has infinitely many saddle points ; * the slater s constraint qualification ( cq ) holds ; and * the linear operator is nonsingular .note that our example to be constructed satisfies the two assumptions made in , i.e. , ( p1 ) and ( p2 ) , and the two additional favorable properties ( p3 ) and ( p4 ) .yet , the admm scheme even with may not be well - defined for solving problem. a closer examination of the proofs given in reveals that the authors mistakenly took for granted the existence of solutions to all the subproblems in ( [ admm ] ) under ( p1 ) and ( p2 ) only . herewe will fix this gap by presenting fairly mild conditions to guarantee the existence of solutions to all the subproblems in ( [ admm ] ) .moreover , in order to deal with the potentially non - solvability issue of the subproblems in the admm scheme , we shall analyze the convergence of the admm under a more useful semi - proximal admm ( spadmm ) setting advocated by fazel et al . , with a computationally more attractive large step - length that can even be bigger than the golden ratio of .let and be two self - adjoint positive semidefinite linear operators . 
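to fix ideas before the discussion of when the scheme is well defined, here is a minimal numerical sketch of the admm iteration displayed above, on a toy instance in which both functions are strongly convex quadratics so that every subproblem has a closed-form (and in particular attainable) minimizer. the data matrices, the vector c and the parameters sigma and tau below are illustrative choices, not values from this note, and the adjoints are realized as plain matrices.

```python
import numpy as np

# toy instance: f(y) = ||y||^2 / 2, g(z) = ||z||^2 / 2, constraint A y + B z = c
rng = np.random.default_rng(0)
m, p, q = 4, 3, 3
A, B = rng.standard_normal((m, p)), rng.standard_normal((m, q))
c = rng.standard_normal(m)
sigma, tau = 1.0, 1.0          # the note also studies larger step-lengths tau

y, z, x = np.zeros(p), np.zeros(q), np.zeros(m)
for k in range(300):
    # y-subproblem: argmin_y f(y) + <x, A y> + sigma/2 ||A y + B z^k - c||^2
    y = np.linalg.solve(np.eye(p) + sigma * A.T @ A,
                        -A.T @ (x + sigma * (B @ z - c)))
    # z-subproblem uses the fresh y^{k+1}
    z = np.linalg.solve(np.eye(q) + sigma * B.T @ B,
                        -B.T @ (x + sigma * (A @ y - c)))
    # multiplier update with step-length tau * sigma
    x = x + tau * sigma * (A @ y + B @ z - c)

print("primal residual:", np.linalg.norm(A @ y + B @ z - c))
```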
then the spadmm takes the following iteration scheme for , z^{k+1}=\operatorname*{arg\,min}\limits_{z}\big\{{{\mathcal l}}_{\sigma}(y^{k+1},z;x^k)+\frac{1}{2}\|z - z^k\|^2_{{\mathcal t}}\big\ } , \\[3 mm ] x^{k+1}=x^k+\tau\sigma({{\mathcal a}}^*y^{k+1}+{{\mathcal b}}^*z^{k+1}-c ) .\end{array } \right.\ ] ] the spadmm scheme with and is nothing but the admm scheme and the case and was initiated by eckstein .most recent studies have shown that the spadmm , a seemingly mild extension of the classical admm , turns out to play a pivotal role in solving multi - block convex composite conic programming problems with a low to medium accuracy . for more details on choosing and , one may refer to the recent ph.d thesis of li .the remaining parts of this note are organized as follows . in section[ preliminary ] , we first present some necessary preliminary results from convex analysis for later discussions and then provide conditions under which the subproblems in the spadmm scheme are solvable , or even admit bounded solution sets , so that this scheme is well - defined . in section [ sec :example ] , based on several results established in section [ preliminary ] , we construct a counterexample that satisfies ( p1)(p4 ) to show that the conclusion on the convergence of admm scheme ( [ admm ] ) in ( * ? ? ?* section 3.2.1 ) can be false without making further assumptions . in section [ sec : converge ] , we establish some satisfactory convergence properties for the spadmm scheme with a computationally more attractive large step - length that can even exceed the golden ratio of , under fairly weak assumptions .we conclude this note in section [ sec : conclusion ] .let be a finite dimensional real euclidean space endowed with an inner product and its induced norm .let be any self - adjoint positive semidefinite linear operator .for any , define and so that for any given set , we denote its relative interior by and define its indicator function ] be a closed proper convex function .we use and to denote its effective domain and its epigraph , respectively .moreover , we use to denote the subdifferential mapping ( * ? ? ?* section 23 ) of , which is defined by it holds that there exists a self - adjoint positive semidefinite linear operator such that for any with and , since is closed , proper and convex , by ( * ? ? ?* theorem 8.5 ) we know that the recession function ( * ? ? ?* section 8) of , denoted by , is a positively homogeneous closed proper convex function that can be written as , for an arbitrary , the fenchel conjugate of is a closed proper convex function defined by since is closed , by ( * ? ? ?* theorem 23.5 ) we know that the dual of problem takes the form of the lagrangian function of problem is defined by which is convex in and concave in .recall that we say the slater s cq for problem ( [ primal ] ) holds if under the above slater s cq , from ( * ? ? ?* corollaries 28.2.2 & 28.3.1 ) we know that is a solution to problem if and only if there exists a lagrangian multiplier such that is a saddle point to the lagrangian function , or , equivalently , is a solution to the following karush - kuhn - tucker ( kkt ) system furthermore , if the solution set to the kkt system is nonempty , by ( * ? ? ?* theorem 30.4 & corollary 30.5.1 ) we know that a vector is a solution to if and only if is an optimal solution to problem and is an optimal solution to problem . 
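before turning to the solvability of these subproblems, the following sketch illustrates one standard way of choosing the proximal operators in the spadmm scheme: with f the l1-norm, taking S = sigma (eta I - A^T A) with eta >= lambda_max(A^T A) and T = 0 turns the y-subproblem into a single soft-thresholding step, so the semidefinite proximal term alone guarantees a unique, attainable minimizer. the concrete data below are again illustrative assumptions and the particular choice of S is a common one, not a prescription from this note.

```python
import numpy as np

def soft_threshold(v, kappa):
    """prox of kappa * ||.||_1"""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

# toy instance: f(y) = ||y||_1, g(z) = ||z||^2 / 2, constraint A y + B z = c
rng = np.random.default_rng(1)
m, p, q = 4, 6, 4
A, B = rng.standard_normal((m, p)), rng.standard_normal((m, q))
c = rng.standard_normal(m)
sigma, tau = 1.0, 1.0
eta = np.linalg.eigvalsh(A.T @ A).max()        # makes S = sigma*(eta I - A^T A) >= 0

y, z, x = np.zeros(p), np.zeros(q), np.zeros(m)
for k in range(500):
    # y-subproblem with the proximal term 1/2 ||y - y^k||_S^2 reduces to one
    # soft-threshold of a gradient-like step
    grad = A.T @ (x + sigma * (A @ y + B @ z - c))
    y = soft_threshold(y - grad / (sigma * eta), 1.0 / (sigma * eta))
    # z-subproblem (T = 0): strongly convex quadratic, closed form
    z = np.linalg.solve(np.eye(q) + sigma * B.T @ B,
                        -B.T @ (x + sigma * (A @ y - c)))
    # multiplier update with step-length tau * sigma
    x = x + tau * sigma * (A @ y + B @ z - c)

print("primal residual:", np.linalg.norm(A @ y + B @ z - c))
```

this is the kind of flexibility the spadmm setting offers: a suitable S or T can restore existence (and uniqueness) of the updates even when the plain admm subproblem would be implicit or, as shown below, not solvable at all.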
in the following , we shall conduct discussions on the existence of solutions to the subproblems in the spadmm scheme .let the augmented lagrangian function be defined by ( [ alagrangian ] ) and and be two self - adjoint positive semi - definite linear operators used in the spadmm scheme ( [ spadmm ] ) .let be an arbitrarily given point .consider the following two auxiliary optimization problems : and note that since , problem is equivalent to we now study under what conditions problems and are solvable or have bounded solution sets .for this purpose , we consider the following assumptions : [ as-1 ] for any , where [ as-11 ] for any , where [ as-2 ] for any .[ as-22 ] for any .note that assumptions [ as-1]-[as-22 ] are not very restrictive .for example , if both and are coercive , in particular if they are norm functions , all the four assumptions hold automatically without any other conditions . under the above assumptions, we have the following results . [ theorem : well - defined ] it holds that * ( a ) * : : problem is solvable if assumption holds , and problem is solvable if assumption holds . * ( b ) * : : the solution set to problem is nonempty and bounded if and only if assumption holds , and the solution set to problem is nonempty and bounded if and only if assumption holds . * ( a ) * we first show that when assumption [ as-1 ] holds , the solution set to problem is not empty .consider the recession function of .on the one hand , by using ( * ? ? ?* theorem 9.3 ) and the second example given in ( * ? ? ?* pages 67 - 68 ) , we know that for any such that or , one must have . on the other hand , for any such that and , by the definition of in we have hence , by assumption [ as-1 ] we know that for all except for those satisfying .then , from ( * ? ? ?* ( b ) in corollary 13.3.4 ) , it holds that .furthermore , by ( * ? ? ?* theorem 23.4 ) we know that is a nonempty set , i.e. , there exists a such that . by noting that is closed and using , we then have , which implies that is the solution to problem hence to problem . by repeating the above discussions we know that problem is also solvable if assumption [ as-11 ] holds .note that problem is equivalent to problem . by reorganizing the proofs for part ( a ), we can see that assumption [ as-2 ] holds if and only if for all . as a result , if assumption [ as-2 ] holds , from ( * ? ? ?* theorem 27.2 ) we know that problem has a nonempty and bounded solution set .conversely , if the solution set to problem is nonempty and bounded , by ( * ? ? ?* corollary 8.7.1 ) we know that there does not exist any such that , so that assumption [ as-2 ] holds .similarly , we can prove the remaining results of part ( b ) .this completes the proof of the proposition .based on proposition [ theorem : well - defined ] and its proof , we have the following results .[ coro2 ] if problem has a nonempty and bounded solution set , then both problems and have nonempty and bounded solution sets .since problem has a nonempty and bounded solution set , there does not exist any with such that , or with such that .thus , assumptions [ as-2 ] and [ as-22 ] hold .then , by part ( b ) in proposition [ theorem : well - defined ] we know that the conclusion of corollary [ coro2 ] holds. [ corpolyhedral ] if ( or ) is a closed proper piecewise linear - quadratic convex function ( * ? ? ? 
* definition 10.20 ) , especially a polyhedral convex function , we can replace the `` '' in assumption by `` '' and the corresponding sufficient condition in part of proposition is also necessary . note that when is a closed piecewise linear - quadratic convex function , the function defined in is a piecewise linear - quadratic convex function with being a closed convex polyhedral set . then by ( * ? ? ?* theorem 11.14(b ) ) we know that is also a piecewise linear - quadratic convex function whose effective domain is a closed convex polyhedral set . by repeating the discussions for part ( a ) of proposition [ theorem : well - defined ] and using ( * ? ? ? * corollary 13.3.4 , ( a ) ) we can obtain that assumption [ as-1 ] with " being replaced by `` '' holds if and only if , or is a nonempty set ( * ? ? ?* proposition 10.21 ) , which is equivalent to the fact that is a nonempty set .if is piecewise linear - quadratic we can get a similar result . finally , we need the following easy - to - verify result on the convergence of quasi - fejr monotone sequences .[ lemma : sq - sum ] let be a nonnegative sequence of real numbers satisfying for all , where is a nonnegative and summable sequence of real numbers .then the quasi - fejr monotone sequence converges to a unique limit point .in this section , we shall provide an example that satisfies all the properties ( p1)-(p4 ) stated in section [ intro ] to show that the solution set to a certain subproblem in the admm scheme can be empty if no further assumptions on , or are made .this means that the convergence analysis for the admm stated in can be false .the construction of this example relies on proposition [ theorem : well - defined ] .the parameter and the initial point in the counterexample are just selected for the convenience of computations and one can construct similar examples for arbitrary penalty parameters and initial points .we now present this example , which is a 3-dimensional 2-block convex optimization problem . in this example , and closed proper convex functions with and .the vector lies in and satisfies the constraint in problem .hence , for problem , the slater cq holds .it is easy to check that the optimal solution set to problem is given by and the corresponding optimal objective value is .the lagrangian function of problem is given by we now compute the dual of problem based on this lagrangian function .[ dualobj ] the objective function of the dual of problem is given by 1-x , & \mbox{\rm if}\quad x\in[-2,-1 ) , \\[1 mm ] -2x , & \mbox{\rm if}\quad x\in[-1,0 ] , \\[1 mm ] -\infty , & \mbox{\rm if}\quad x\in(0+\infty ) .\end{array } \right.\ ] ] by the definition of the dual objective function , we have { \displaystyle}=\inf_{z\ge0,y_2}\big\{\inf_{y_1}(\max(e^{-y_1}+y_2,y_2 ^ 2)+(y_2-z-2)x)\big\ } \\[3 mm ] { \displaystyle}=\inf_{z\ge 0,y_2}\{\max(y_2,y_2 ^ 2)+y_2x - zx-2x\ } \\[3 mm ] { \displaystyle}=\min_{y_2 } \big ( \inf_{y_2\in[0,1],z\ge 0}\big\{y_2+y_2x - zx-2x\big\ } , \inf_{y_2\not\in[0,1],z\ge 0}\big\{y_2 ^ 2+y_2x - zx-2x\big\}\big ) . 
\end{array}\ ] ] for any given , we have ,z\ge 0}\big\{y_2+y_2x - zx-2x\big\ } \\[2 mm ] { \displaystyle}=\inf_{y_2\in[0,1]}\big\{y_2(1+x)\big\ } + \inf_{z\ge 0}\big\{-zx\big\ } -2x = \left\ { \begin{array}{ll } 1-x,\quad & \mbox{if}\quad x < -1 , \\[1 mm ] -2x , & \mbox{if}\quad x\in[-1,0 ] , \\[1 mm ] -\infty , & \mbox{if}\quad x > 0 .\end{array } \right .\end{array}\ ] ] moreover , for any , it holds that ,z\ge 0}\big\{y_2 ^ 2+y_2x - zx-2x\big\ } \\[3 mm ] \quad{\displaystyle}=\inf_{y_2\not\in[0,1]}\big\{y_2 ^ 2+y_2x+x^2/4-x^2/4 - 2x\big\}+\inf_{z\ge0}\big\{-zx\big\ } \\[3 mm ] \quad { \displaystyle}=\inf_{y_2\not\in[0,1]}\big\{(y_2+x/2)^2\big\}+\inf_{z\ge 0}\big\{-zx\big\}-x^2/4 - 2x \\[4 mm ] \quad = \left\ { \begin{array}{ll } -x^2/4 - 2x , \quad & \mbox{if}\quad x< -2 , \\[1 mm ] 1-x , & \mbox{if}\quad x\in[-2,-1 ] , \\[1 mm ] -2x , & \mbox{if}\quad x\in[-1,0 ] , \\[1 mm ] -\infty , & \mbox{if}\quad x > 0 .\end{array } \right .\end{array}\ ] ] then by combining the above discussions on the two cases we obtain the conclusion of this lemma .( left ) and the function ( right).,title="fig:",scaledwidth=49.5% ] ( left ) and the function ( right).,title="fig:",scaledwidth=49.5% ] by lemma [ dualobj ] , one can see that the optimal solution to the dual of problem is and the optimal value of the dual of problem is ( see fig . [ fig:1 ] ) .moreover , the set of solutions to the kkt system for problem is given by next , we consider solving problem by using the admm scheme . for convenience ,let and set the initial point .now , one should compute by solving define the function $ ] by & = { \displaystyle}\inf_{y_1}\big\ { \max\big(e^{-y_1}+y_2,y_2 ^ 2\big)+(y_2- 2)^2/2 \big\ } \\[2 mm ] & = \left\ { \begin{array}{ll } \frac{3}{2 } y_2 ^ 2 - 2y_2 + 2\quad & \mbox{if}\quad y_{2}\not\in[0,1 ] , \\[1 mm ] \frac{1}{2 } y^2_2-y_2 + 2 & \mbox{if}\quad y_{2}\in[0,1 ] .\end{array } \right .\end{array}\ ] ] by direct calculations we can see that the above infimum is attained at with ( see fig .[ fig:1 ] ) . however , we have for any , means that although is finite , it can not be attained at any .then the subproblem for computing is not solvable and hence the admm scheme ( [ admm ] ) is not well - defined .note that for problem , assumption [ as-1 ] fails to hold since the direction satisfies and but .the counterexample constructed here is very simple . yet, one may still ask if the objective function about in problem ( [ problem ] ) can be replaced by an even simpler quadratic function .actually , this is not possible as assumption [ as-1 ] holds if is a quadratic function and the original problem has a solution .specifically , suppose that is a given number , is a self - adjoint positive semidefinite linear operator and is a given vector while takes the following form from ( * ? ? ?* pages 67 - 68 ) we know that + \infty,&\quad \mbox{if}\quad{{\mathcal q}}y\neq 0 .\end{array } \right.\ ] ] if problem has a solution , one must have whenever .this , together with , clearly implies that assumption [ as-1 ] holds .the example presented in the previous section motivates us to consider the convergence of the spadmm scheme with a computationally more attractive large step - length .we re - emphasize that the spadmm scheme is a natural yet more useful extension of the admm scheme and all the results presented in this section are applicable for the ammm scheme . 
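before moving on, a quick numerical check supports the computation above: using the displayed partial objective max(e^{-y_1}+y_2, y_2^2) + (y_2-2)^2/2 (the choice sigma = 1 and the stated initial point are assumed), fixing y_2 = 1 gives values that decrease strictly towards the infimum 3/2 obtained from the piecewise formula above, but never reach it for any finite y_1, which is exactly why the first admm subproblem has no minimizer.

```python
import numpy as np

def F(y1, y2):
    # partial objective of the first ADMM subproblem in the counterexample
    return max(np.exp(-y1) + y2, y2 ** 2) + (y2 - 2.0) ** 2 / 2.0

for y1 in [0, 2, 4, 8, 16]:
    print(f"y1 = {y1:>2}:  F(y1, 1) = {F(y1, 1.0):.12f}")
# the values decrease strictly towards 3/2 but exceed it for every finite y1,
# since exp(-y1) never vanishes: the argmin over y does not exist
```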
for convenience , we introduce some notations , which will be used throughout this section .we use and to denote the two self - adjoint positive semidefinite linear operators whose definitions , corresponding to the two functions and in problem , can be drawn from .let be a given vector , whose definition will be specified latter .we denote , and for any .if additionally the spadmm scheme generates an infinite sequence , for we denote , and , and define the following auxiliary notations -{{\mathcal s}}(y^{k}-y^{k-1 } ) , \\[2 mm ] \displaystyle v^{k}:=-{{\mathcal b}}[x^{k}+(1-\tau)\sigma({{\mathcal a}}^*y_e^{k}+{{\mathcal b}}^*z_e^{k})]-{{\mathcal t}}(z^{k}-z^{k-1 } ) , \\[2 mm ] \psi_k:= \frac{1}{\tau\sigma}\|x^k_e\|^2 + \|y_e^{k}\|_{{\mathcal s}}^2 + \|z_e^k\|^2_{{{\mathcal t}}+\sigma{{\mathcal b}}{{\mathcal b}}^ * } , \color{black } \\[2 mm ] \phi_k : = \psi_{k}+\|z^k - z^{k-1}\|_{{\mathcal t}}^2 + \max(1-\tau , 1-\tau^{-1})\sigma \|{{\mathcal a}}^*y_e^{k}+{{\mathcal b}}^*z_e^{k}\|^2 \end{array } \right.\ ] ] with the convention and . based on these notations , we have the following result .[ prop:2 ] suppose that is a solution to the kkt system , and that the spadmm scheme generates an infinite sequence is guaranteed to be true if assumptions and hold , cf .proposition .then , for any , & + \min(1,1-\tau+\tau^{-1})\sigma\|{{\mathcal a}}^*y_e^{k+1}+{{\mathcal b}}^*z_e^{k+1}\|^2 \\[2 mm ] & + \min(\tau,1+\tau-\tau^2)\sigma\|{{\mathcal b}}^*(z^{k+1}-z^k)\|^2 \end{array}\ ] ] and & \quad + ( 1-\tau)\sigma\|{{\mathcal a}}^*y^{k+1}_e+{{\mathcal b}}^*z^{k+1}_e\|^2 + \sigma\|{{\mathcal a}}^*y^{k+1}_e+{{\mathcal b}}^*z^{k}_e\|^{2}. \end{array}\ ] ] for any , the inclusions in directly follow from the first - order optimality condition of the subproblems in the spadmm scheme ( [ spadmm ] ) .the inequality has been proved in fazel et al .* parts ( a ) and ( b ) in theorem b.1 ) . meanwhile , by using ( b.12 ) in (* theorem b.1 ) and we can get -\frac{2-\tau}{2}\sigma\|{{\mathcal a}}^*y^{k+1}_e+{{\mathcal b}}^*z^{k+1}_e\|^2 + \sigma\langle { { \mathcal b}}^*(z^{k+1}-z^k ) , { { \mathcal a}}^*y_e^{k+1}+{{\mathcal b}}^*z_e^{k+1}\rangle \\[2 mm ] -\frac{1}{2}\|y_e^{k+1}\|_{{\mathcal s}}^2 + \frac{1}{2}\|y_e^{k}\|_{{\mathcal s}}^2 -\frac{1}{2}\|z_e^{k+1}\|_{{\mathcal t}}^2 + \frac{1}{2}\|z_e^{k}\|_{{\mathcal t}}^2 \\[2 mm ] \ge \|y_e^{k+1}\|^2_{\sigma_f } + \|z_e^{k+1}\|^2_{\sigma_g } + \frac{1}{2}\|y^{k+1}-y^k\|_{{\mathcal s}}^2 + \frac{1}{2}\|z^{k+1}-z^k\|_{{\mathcal t}}^2 , \end{array}\ ] ] which , together with the definition of in , implies .this completes the proof .now , we are ready to present several convergence properties of the spadmm scheme . 
[ theorem : c1 ] assume that the solution set to the kkt system for problem is nonempty .suppose that the spadmm scheme generates an infinite sequence , which is guaranteed to be true if assumptions and hold .then , if one has the following results : ( a ) : : the sequence converges to an optimal solution to the dual problem , and the primal objective function value sequence converges to the optimal value ; ( b ) : : the sequences and are bounded , and if assumptions and hold , the sequence and are also bounded ; ( c ) : : any accumulation point of the sequence is a solution to the kkt system , and if is one of its accumulation point , , , and as ; ( d ) : : if and , then each of the subproblems in the spadmm scheme has a unique optimal solution and the whole sequence converges to a solution to the kkt system .let be an arbitrary solution to the kkt system of problem .we first establish some basic results and then prove ( a ) to ( d ) one by one . in the following ,the notations provided at the beginning of this section are used . note that for any . then , if , by using and we obtain that the sequences are all bounded , and if but , by using the equality that we know .therefore , by using and we know that the sequences in are all bounded .moreover , it holds that which , together with , implies that and hold . to sum up, we have shown that when holds , the sequences in are bounded and and hold .this , consequently , implies that and are bounded . in the following ,we prove ( a ) to ( d ) separately . *( a ) * since is a bounded sequence , for any one of its accumulation points , e.g. , it admits a subsequence , say , , such that . by taking limits in the first two equalities of along with for and using and, we obtain that from and we know that for any , and .hence , we can get and so that then , by using , , , and the outer semi - continuity of subdifferential mappings of closed proper convex functions we know that this implies that is a solution to the dual problem . therefore , we can conclude that any accumulation of is a solution to the dual problem .to finish the proof of part ( a ) , we need to show that is a convergent sequence .this will be done in the following .we first consider the case that .define the sequence by from in proposition [ prop:2 ] and the fact that , we know that is a nonnegative and bounded sequence .thus , there exists a subsequence of , say , such that .since is bounded , it must has a convergent subsequence , say , , such that exists .note that is a solution to the kkt system .therefore , without loss of generality , we can reset from now on . by using in proposition [ prop:2 ] we know the nonnegative sequence is monotonically nonincreasing , and since , we have which indicates that is a convergent sequence .second , we need to consider the case that .define the nonnegative sequence by from we known that which , together with , lemma [ lemma : sq - sum ] and the fact that , implies that is a convergent sequence . as a result , by the definition of we know the sequence is nonnegative and bounded . 
then by choosing proper subsequences of and and repeating the previous analysis for getting and with and being replaced by and , we can establish that and .hence , is also a convergent sequence .now we study the convergence of the primal objective function value .one the one hand , since is a saddle point to the lagrangian function defined by , we have for any , .this , together with , implies that for any , on the other hand , from and we know that by combining the above two inequalities together and using we can get -\langle { { \mathcal t}}(z^{k}-z^{k-1}),z_e^{k}\rangle -\sigma\langle{{\mathcal b}}^{*}(z^{k-1}-z^{k}),{{\mathcal a}}^ { * } y_e^{k}\rangle \\[1 mm ] -(1-\tau)\sigma\|{{\mathcal a}}^{*}y_e^{k}+{{\mathcal b}}^{*}z_e^{k}\|^2 \ge f(y^{k})+g(z^{k } ) . \end{array}\ ] ] since the sequences in are bounded , by using , and the fact that any nonnegative summable sequence should converge to zero we know the left - hand - sides of both and converge to when .consequently , by the squeeze theorem .thus , part ( a ) is proved . *( b ) * from we konw that for any , on the one hand , from the boundedness of we know that the sequence is bounded . on the other hand , from , and the boundedness of the sequences in, we can use & -\sigma \langle { { \mathcal b}}^*(z^{k-1}-z^{k}),{{\mathcal a}}^*y^k\rangle - \langle{{\mathcal s}}(y^{k}-y^{k-1 } ) , y^k \rangle \end{array}\ ] ] to get the boundedness of the sequence .hence , from we know the sequence is bounded from above . from we know , together with the fact that the sequences in are bounded , implies that is bounded from below .consequently , is a bounded sequence . by using similar approach, we can obtain that is also a bounded sequence .next , we prove the remaining part of ( b ) by contradiction .suppose that assumption [ as-2 ] holds and the sequence is unbounded .note that the sequence is always bounded .thus it must have a subsequence , with being unbounded and non - decreasing , converging to a certain point . from the boundedness of the sequences inwe know that and are bounded. then we have and , similarly , . by noting that , one has . on the other hand ,define the sequence by from the boundedness of the sequence and the definition of we know that .since , by ( * ? ? ?* theorem 8.2 ) we know that is a recession direction of .then from the fact that we know that , which contradicts assumption [ as-2 ] .the boundedness of under assumption [ as-22 ] can be similarly proved .thus , part ( b ) is proved . *( c ) * suppose that is an accumulation point of .let be a subsequence of which converges to . by taking limits in along with for and using ,and we can see that which can imply that is a solution to the kkt system .now , without lose of generality we reset .then , by part ( a ) we know that the sequence defined in converges to zero if , and the sequence defined in converges to zero if but . thus , we always have as a result , it holds that , and as .moreover , by using the fact that and as , we can get as .this completes the proof of part ( c ) . *( d ) * if and , the subproblems in the admm scheme are strongly convex , hence each of them has a unique optimal solution . then ,by part ( c ) we know that and are convergent .note that is convergent by part ( a ) .therefore , by part ( c ) we know that converges to a solution to the kkt system . hence , part ( d ) is proved and this completes the proof of the theorem . 
before concluding this note ,we make the following remarks on the convergence results presented in theorem [ theorem : c1 ] .the corresponding results in part ( a ) of theorem [ theorem : c1 ] for the admm scheme ( [ admm ] ) with have been stated in boyd et al . . however , as indicated by the counterexample constructed in section [ sec : example ] , the proofs in need to be revised with proper additional assumptions .actually , no proof on the convergence of has been given in at all .nevertheless , one may view the results in part ( a ) as extensions of those in boyd et al . for the admm scheme ( [ admm ] ) with to a computationally more attractive spadmm scheme with a rigorous proof .the condition that and in part ( d ) was firstly proposed by fazel et al . .note that , numerically , the boundedness of the sequences generated by a certain algorithm is a desirable property and assumptions [ as-2 ] and [ as-22 ] can furnish this purpose .assumption [ as-2 ] is pretty mild in the sense that it holds automatically , even if , for many practical problems where has bounded level sets .of course , the same comment can be applied to assumption [ as-22 ] .the sufficient condition that but simplifies the condition proposed by sun et al .but was used in ( * ? ? ?* theorem 2.2 ) . ] for the purpose of achieving better numerical performance .the advantage of taking the step - length has been observed in for solving high - dimensional linear and convex quadratic semi - definite programming problems . in numerical computations , one can start with a larger , e.g. , and reset it as for some , e.g. , if at the -th iteration one observes that for some given positive constant .since can be reset at most a finite number of times , our convergence analysis is valid for such a strategy. one may refer to ( * ? ? ?* remark 2.3 ) for more discussions on this computational issue .in this note , we have constructed a simple example possessing several nice properties to illustrate that the convergence theorem of the admm scheme ( [ admm ] ) stated in boyd et al . can be false if no prior condition that guarantees the existence of solutions to all the subproblems involved is made . in order to correct this mistakewe have presented fairly mild conditions under which all the subproblems are solvable by using standard knowledge in convex analysis . based on these conditions ,we have further conducted the convergence analysis of the admm under a more general and useful spadmm setting , which has the the flexibility of allowing the users to choose proper proximal terms to guarantee the existence of solutions to the subproblems . in particular , we have established some satisfactory convergence properties of the spadmm with a computationally more attractive large step - length that can exceed the golden ratio of 1.618 . in conclusion , this note has ( i ) clarified some confusions on the convergence results of the popular admm ; ( ii ) opened the potential for designing computationally more efficient admm - type solvers in the future .10 boyd , s. , parikh , n. , chu , e. , peleato , b. and eckstein , j. : distributed optimization and statistical learning via the alternating direction method of multipliers . found .trends mach .* 3*(1),1122 ( 2011 ) fortin , m. , glowinski , r. : augmented lagrangian methods .applications to the numerical solution of boundary value problems .studies in mathematics and its applications , 15 . translated from the french by b. hunt and d. c. 
spicer .elsevier science publishers b.v .( 1983 ) glowinski , r. : on alternating direction methods of multipliers : a historical perspective . in fitzgibbon , w. , kuznetsov , y.a . , neittaanmaki , p. and pironneau , o. ( eds . ) modeling , simulation and optimization for science and technology , pp .springer , netherlands ( 2014 ) glowinski , r and marroco , a. : sur lapproximation , par lments finis dordre un , et la rsolution , par pnalisation - dualit dune classe de problmes de dirichlet non linaires .revue franaise datomatique , informatique recherche oprationelle .analyse numrique * 9*(2 ) 4176 ( 1975 )
this note serves two purposes . firstly , we construct a counterexample to show that the statement on the convergence of the alternating direction method of multipliers ( admm ) for solving linearly constrained convex optimization problems in a highly influential paper by boyd et al . [ found . trends mach . learn . 3(1 ) 1 - 122 ( 2011 ) ] can be false if no prior condition on the existence of solutions to all the subproblems involved is assumed to hold . secondly , we present fairly mild conditions to guarantee the existence of solutions to all the subproblems and provide a rigorous convergence analysis on the admm , under a more general and useful semi - proximal admm ( spadmm ) setting considered by fazel et al . [ siam j. matrix anal . appl . 34(3 ) 946 - 977 ( 2013 ) ] , with a computationally more attractive large step - length that can even exceed the practically much preferred golden ratio of .
the spread of disease has been one of the focuses in the field of statistical physics for many years .the dynamical behavior of so - called susceptible - infected - susceptible ( sis ) model and susceptible- infected- removed ( sir ) model have been widely investigated on regular network and complex networks[1 - 12 ] . within the studying ,individuals are modeled as sites and possible contacts between individuals are linked by edges between the sites .it is easy to see that both the properties of disease and topological character of network determine the dynamics of the spread of disease .studies have showed that there is an epidemic threshold on regular networks .if the effective spreading rate , the infection spreads and becomes endemic ; otherwise the infection will die out . while the threshold disappears on scale - free networks[4 ] .usually , infectious diseases , such as hiv and computer virus , have the similar spreading property .they not only can spread in one household , but also can spread from one household to another . to study this spreading character ,there have been of considerable interests to epidemic models spreading among a community of households[12 - 17 ] .these studies were concerned with sir model , which can not appear endemic behavior . in 1999, ball introduced the sis household - structure model[18 ] , in which the population is partitioned into households with members in each household .a threshold parameter was defined .it is shown that for the household with members , if then the epidemic die out ; if the epidemic will exist at an endemic equilibrium.this model has also been studied on homogeneous network by the mean of self - consistent field[19,20 ] .the similar results have been obtained .these previous studies about household - structure epidemic model were mainly on regular networks .however , studies have showed that a large number of systems , such as internet , world - wide - web , physical , biological , and social networks , exhibit complex topological properties[21 - 23 ] . in particular , small - world properties[24 ] and scale - free degree distributions[25 ]appear in many real network systems . in this paper, we will analyze the sis household - structure epidemic model on complex networks .the outline is as follows : 1 ) introduction ; 2 ) description of the model ; 3 ) mean - field equations ; 4 ) steady - state solutions ; 5 ) simulation ; 6 ) summary .in complex networks with degree distribution , which is the probability that a given site has connections ( links ) that connect it with other sites ( we say that the given site degree is . ) , there are individuals that are grouped as a household on every site .we assume that these n individuals contact each other fully .a healthy individual may get infected from within the household and from outside its household . the parameters and are the infection rates from outside and from within the household respectively .we give each site a number ) ] ,the networks are scale - free[19 ] . 
when , , the threshold is absent .this fact implies that for any positive value of the infection can pervade the system , which is the same as the standard sis model[4 ] .[ [ n2 ] ] n2 ~~ let .suppose and .considering eqs.([g1])-([g4 ] ) can be written as: the matrix * s * is : since , so exists .thus: where and from ( [ matrixs ] ) , we get : substituting ( [ g17 ] ) and ( [ g16 ] ) to ( [ theta2 ] ) , we get the self - consistent equation of : that is: obviously , is a solution of eq.([g18 ] ) .in addition , a non - zero solution with and is allowed if the following inequality holds: that is: from ( [ g20 ] ) , we get the epidemic threshold: , and is an increasing function of and , but a decreasing function of the recover rate .so the epidemic threshold is determined by three parameters and the networks degree distribution .we notice that the expression ( [ g21 ] ) involves multiplication of the well - known term [2,4,6,9 ] , which is closely related to the `` average '' number of secondary infections[7,8 ] .not surprising , this result is the same as that of the standard sis model[4 ] . for ,the network is homogeneous. then , we can increase the recover rate or decrease the site degree and the size of the household to lift prevent the infectious disease from spreading . for large threshold is very small . for )$ ] ,the network is scale - free[21 ] . when , , then .so the threshold is absent for scale - free network .this implies that for any positive value of , the infection can pervade the system even with high recover rate .in above section , we have given the analytical result of the sis model with household structure .we find that for regular network there is an epidemic threshold ; while for scale - free network the threshold disappears . for comparison, we simulate the model on regular network(see fig[fig1 ] ) and on scale free network(see fig[fig2 ] ) respectively . for simplicity(without lack of generality ) , we set , . in fig.1, we plot the fraction of infected individuals in the stationary state , , for different values of on regular network with . obviously , there is a threshold for each . for , is , in agreement with the corresponding analytical result , , which can be obtained from([g21 ] ) . only when is increased above is a significant prevalence observed . in fig.2 , we plot the fraction of infected individuals in the stationary state , , for different values of on scale - free network with .we observe that is absent . in contract with the standard sis model ,of which the prevalence , , increases slowly when increasing [24 ] , our current epidemic model exhibits that increases rapidly with .in this work , we analyze the sis model that incorporates social household . 
we have focused on the impaction of geometrical property of complex networks and on the role of several parameters in the spreading threshold .results show that the large household size n and the high within household infection rate are more likely to cause the spread of disease .but it s worth noticing that , even when local recovery rate is greater than effective infection rate , in divergent networks such as scale - free network , disease still can spread !this results tell us that even the local recover condition is good enough to give local protection , there are still some probability for a wide range disease spreading .it seems that this phenomenon can only exist in divergent networks with household structure .maybe this imply that we have to care about the network structure much more than recover condition during disease spreading .of course , the model we have studied seems more ideal .for example , we have supposed that the existence of the n - member households do not affect the property of the complex networks , and also we do not take the move of the individuals into account .however , the result tells us that the properties of the complex networks play the most important role in the epidemic spreading .this work was supported by the national science foundation of china under grant no .we thank research professor yifa tang for helpful discussion .we also acknowledge the support from the state key laborary of scientific and engineering computering ( lsec ) , chinese academic of science .ball , threshold behaviour in stochastic epidemics among households , in : c.c .heyde , y.v .prohorov , r. pyke and s.t .rachev ( eds . ) , athens conference on applied probability and time series , vol .i , applied probability , lecture notes in statistics * 114 * , 253 ( 1996 ) .fig[fig1 ] the fraction of the infected individuals , a function of the spreading rate for household structure sis model on regular networks with , .the simulations have been averaged over 200 different realizations .fig[fig2 ] the fraction of the infected individuals , a function of the spreading for household structure sis model on scale - free networks with , .the simulations have been run in networks with nodes .
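a rough monte carlo sketch of the kind of simulation summarized by the figure captions is given below: every node of a (scale-free) graph hosts a fully mixed household of n individuals, infection pressure on a susceptible comes from within its own household and from neighbouring households, and the stationary prevalence is estimated from the tail of the run. the rate names lam_in, lam_out and mu, the discrete-time update rule and all numerical values are assumptions made for illustration, since the original symbols did not survive extraction.

```python
import numpy as np
import networkx as nx

def household_sis(G, N=3, lam_in=0.2, lam_out=0.05, mu=0.1,
                  steps=1000, dt=0.1, seed=0):
    """return the time-averaged prevalence over the second half of the run."""
    rng = np.random.default_rng(seed)
    n_sites = G.number_of_nodes()
    I = np.zeros(n_sites, dtype=int)              # infected members per household
    I[rng.choice(n_sites, size=max(1, n_sites // 50), replace=False)] = 1
    prevalence = []
    for _ in range(steps):
        neigh_I = np.array([sum(I[j] for j in G.neighbors(i))
                            for i in range(n_sites)])
        force = lam_in * I + lam_out * neigh_I     # pressure on one susceptible
        new_inf = rng.binomial(N - I, 1.0 - np.exp(-force * dt))
        recovered = rng.binomial(I, 1.0 - np.exp(-mu * dt))
        I = I + new_inf - recovered
        prevalence.append(I.sum() / (N * n_sites))
    return float(np.mean(prevalence[steps // 2:]))

G = nx.barabasi_albert_graph(1000, 3, seed=1)      # scale-free stand-in
for lam_out in [0.0, 0.02, 0.05, 0.1]:
    rho = household_sis(G, lam_out=lam_out)
    print(f"lam_out = {lam_out:.2f} -> prevalence ~ {rho:.3f}")
```

sweeping lam_out mimics the experiment of fig. 2; the quantitative behaviour of course depends on the assumed update rule and parameters.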
in this paper we study household - structure sis epidemic spreading on general complex networks . the household structure gives us a way to distinguish the inner and the outer infection rates . unlike household - structure models on homogeneous networks , such as regular and random networks , here we consider heterogeneous networks with arbitrary degree distribution p(k ) . first we introduce the epidemic model . then rate equations under the mean - field approximation and computer simulations are used to analyze the model . some unique phenomena that exist only in divergent networks with household structure are found , while we also reach the similar conclusion that some simple geometrical quantities of networks have an important impact on the infection properties of infectious diseases . it seems that in our model , even when the local cure rate is greater than the inner infection rate in every household , the disease can still spread on a scale - free network . this implies that the disease would not sustain itself in any single household , yet it spreads over the whole network . since our social networks seem to have this structure , this conclusion may remind us that during disease spreading we should pay more attention to the network structure than to local cure conditions . infectious disease ; sis model ; networks 89.75.-hc ; 05.70.ln ; 02.10.yn ; 87.23.ge ; 64.70.-p
future wireless systems should offer connectivity almost everywhere .this objective represents an ambitiously engineering challenge in scenarios where the direct link between two nodes does not have the desired quality , e.g. due to shadowing or distance . on that score , multi - hop communication for coverage extension and meshed network architecturesare currently discussed or scheduled in all wireless networks standards of the next generation .therefore , the relay channel experiences a revival recently .the problem was introduced by van der meulen in in the early seventies .a few years later , cover and el gamal obtained the capacities of the physically degraded and reversely degraded relay channels and upper and lower bounds on the capacity of the general relay channel in .the general problem is still unsolved .fundamental insights about the general problem and recent development can be found in and references therein .we consider a three - node network where one node acts as a relay to enable the bidirectional communication between two other nodes .the two - way communication problem without a relay node was introduced by shannon in in 1961 already .therein , he obtained the capacity region for the average error for the restricted two - way channel , i.e. a feedback between the two nodes is not allowed . nowadays ,this is regarded as the first network information theory problem . in informationtheory it is often assumed that the nodes can transmit and receive at the same time , i.e. full - duplex nodes .this assumption is in wireless communication hard to fulfill , since it is practically difficult to isolate a simultaneously received and transmitted signal using the same frequency sufficiently .therefore , in this work we assume half - duplex nodes . as a natural consequence of this assumptionis that relay communication is performed in phases .often the relay communication should be integrated in existing infrastructures and most protocol proposals base usually on orthogonal components which require exclusive resources for each link . as a consequencethey suffer from an inherent loss in spectral efficiency .this loss can be significantly reduced if bidirectional relay communication is desired .because then the communication can be efficiently performed in two phases . in the first phase , the multiple access phase ( mac ) , the information is transmitted to the relay node . in the succeeding broadcast phase ( bc ) , the relay node forwards the information to its destinations . in and , where gaussian channels are considered , the relay performs superposition encoding in the second phase .the knowledge of the first phase allows the receiving nodes to perform interference cancellation before decoding so that effectively we achieve interference - free transmission in the second phase .another interesting approach , is based on the network coding principle , where the relay node performs an xor operation on the decoded bit streams .but since network coding is originally a multi - terminal source coding problem , such an approach operates on the decoded data and therefore does not deal with channel coding aspects .because of our practical motivation , we apply time - division to separate the bidirectional relay communication into two phases .the optimal coding strategy and capacity region of the general multiple access channel is known . 
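the network-coding alternative mentioned above is easy to make concrete: after decoding both messages in the multiple access phase, the relay broadcasts only their xor, and each node strips off its own message to recover the other one. a toy sketch (the message length and the byte representation are arbitrary):

```python
import os

w1 = os.urandom(16)                              # message decoded from node 1
w2 = os.urandom(16)                              # message decoded from node 2
w_relay = bytes(a ^ b for a, b in zip(w1, w2))   # broadcast in the BC phase

recovered_at_node1 = bytes(a ^ b for a, b in zip(w_relay, w1))   # = w2
recovered_at_node2 = bytes(a ^ b for a, b in zip(w_relay, w2))   # = w1
assert recovered_at_node1 == w2 and recovered_at_node2 == w1
```

as noted above, this operates purely on decoded data and therefore leaves the channel coding question of the broadcast phase untouched, which is exactly the question addressed in what follows.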
in this work ,we present the optimal broadcast coding strategy of the two - phase bidirectional relay channel based on classical channel coding .it shows that all rate pairs in the capacity region can be achieved using an auxiliary random variable taking two values , i.e. we achieve the capacity region by the principle of time - sharing .thereby , we see an interesting connection to a joint source and channel coding approach for the broadcast channel based on slepian - wolf coding . in a multi - terminal systemthe average and maximal error capacity region can be different , even in the case of asymptotically vanishing errors as is shown by dueck in .while for single - user channels it is of no importance whether we use vanishing average or maximal probabilities of error in the definition of achievable rates , the choice of the error criterion makes a big difference if we pass to the consideration of the strong converses for one - way channels .indeed ahlswede demonstrated in that the strong converse does not hold for the compound channels if we use the average probability of error for the definition of -achievable rates but it is well known that the strong converse is valid if we use maximal error probabilities as was shown by wolfowitz . for these reasons , we will pay a lot of attention to the consideration of the maximal and average error probabilities and the relation between them in the main part of the paper and in the proofs .the paper is organized as follows : in the following two subsections we present the two - phase bidirectional relay model , which describes the context of the bidirectional broadcast channel and after that we briefly restate the mac capacity region for completeness . in section [ sec : bccap ] we prove a coding theorem and a weak converse for the maximum error probability .the proof shows that the capacity region is independent of whether we use asymptotically vanishing average or maximum probability of error . in section [ sec : strongconv ] we prove the strong converse for the maximum error probability using the blowing - up lemma . finally , from this we can deduce that the ] or and equals the ] .based on the capacity regions of the two phases the time - division between mac and bc phase can be optimized .this gives us the largest achievable rate region for the finite alphabet discrete memoryless bidirectional relay channel under the simplification of time - division into two phases , which will be discussed in section [ sec : discussion ] by means of a binary channel example .we consider a three - node network with two message sets and . in our bidirectional channelwe want the messages located at node 1 and the message located at node 2 to be known at node 2 and node 1 , respectively .we assume that there is no direct channel between node 1 and 2 .therefore , node 1 and 2 need the support of a relay node r. we simplify the problem by assuming an a priori separation of the communication into two phases .furthermore , we do not allow cooperation between the encoders at node 1 and node 2 . otherwise , a transmitted symbol could depend on previously received symbols . for a two - way channelthis is known as a restricted two - way channel . with this simplificationwe end up with a multiple access phase , where node 1 and 2 transmit messages and to the relay node , and a broadcast phase , where the relay forwards the messages to node 2 and 1 , respectively .we look at the two phases separately . 
after that we will briefly consider the optimal time - division between the two phases . in the multiple access phase ( mac )we have a classical multiple access channel , where the optimal coding strategy and capacity region is known , .we will restate the capacity region in the next subsection .thereby , let and denote the achievable rates between node 1 and 2 and the relay node in the mac phase . for the broadcast phase ( bc ), we assume that the relay node has successfully decoded the messages and in the multiple access phase . from the unionbound we know that the error probability of the two - phase protocol is at most the sum of the error probability of each phase .therefore , an error - free mac phase is reasonable if we assume rates within the mac capacity region and a sufficient coding length . from thiswe have a broadcast channel where the message is known at node 1 and the relay node and the message is known at node 2 and the relay node , as depicted in figure [ fig : model ] .thereby , let , and denote the input and , and the output symbols of node 1 , node 2 , and the relay node , respectively .furthermore , let and denote the achievable rates between the relay node and node 1 and 2 in the bc phase .the mission of the relay node is to broadcast a message to node 1 and 2 which allows them to recover the unknown source .this means that node 1 wants to recover message and node 2 wants to recover message .we will present an information theoretic optimal coding strategy and the capacity region of the bidirectional broadcast channel in section [ sec : bccap ] . in this subsection, we restate the capacity region of the multiple access channel , which was found by ahlswede and liao and is part of any textbook on multiuser information theory , e.g. . a _ discrete memoryless multiple access channel _ is the family with finite input alphabets , , and the finite output alphabet where the probability transition functions are given by for a given probability transition function .the capacity region of the memoryless multiple access channel is the set of all rate pairs ] with values in and joint distribution .furthermore , the range of the auxiliary random variable has a cardinality bounded by .in this section we present our main result , the capacity region of a broadcast channel where the receiving nodes have perfect knowledge about the message which should be transmitted to the other node .the capacity region can be achieved by classical channel coding principles .first we need to introduce some standard notation .let and , , be finite sets .a _ discrete memoryless broadcast channel _ is defined by a family of probability transition functions given by for a probability transition function , i.e. is a stochastic matrix . in what followswe will suppress the super - index in the definition of the -th extension of the channel , i.e. we will write simply instead of .this should cause no confusion since it will be always clear from the context which block length is under consideration .in addition , we will use the abbreviation , where and denote the message sets . 
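as a small numerical companion to the mac capacity region restated above, the sketch below evaluates, for a fixed product input distribution, the usual three mutual-information constraints R1 <= I(X1;Y|X2), R2 <= I(X2;Y|X1) and R1+R2 <= I(X1,X2;Y). the noiseless binary adder mac used here is only an illustrative stand-in and is not the binary example discussed later.

```python
import numpy as np

def H(p):
    """shannon entropy (bits) of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# W[y, x1, x2]: noiseless binary adder MAC, y = x1 + x2
W = np.zeros((3, 2, 2))
for x1 in range(2):
    for x2 in range(2):
        W[x1 + x2, x1, x2] = 1.0

p1 = np.array([0.5, 0.5])   # independent (product) inputs, as the MAC theorem requires
p2 = np.array([0.5, 0.5])

p_y = np.einsum('yab,a,b->y', W, p1, p2)
I_joint = H(p_y) - sum(p1[a] * p2[b] * H(W[:, a, b])
                       for a in range(2) for b in range(2))            # I(X1,X2;Y)
I1_g2 = sum(p2[b] * (H(W[:, :, b] @ p1) -
                     sum(p1[a] * H(W[:, a, b]) for a in range(2)))
            for b in range(2))                                         # I(X1;Y|X2)
I2_g1 = sum(p1[a] * (H(W[:, a, :] @ p2) -
                     sum(p2[b] * H(W[:, a, b]) for b in range(2)))
            for a in range(2))                                         # I(X2;Y|X1)

print(f"I(X1;Y|X2) = {I1_g2:.2f}, I(X2;Y|X1) = {I2_g1:.2f}, I(X1,X2;Y) = {I_joint:.2f}")
```

for this stand-in channel the uniform inputs already give the familiar pentagon R1 <= 1, R2 <= 1, R1 + R2 <= 1.5 bit per channel use; the capacity region is the convex closure of such pentagons over all product input distributions.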
a -code for the _ bidirectional broadcast channel _ consists of one encoder at the relay node and a decoder at node one and two the element in the definition of the decoders is included for convenience only and plays the role of an erasure symbol .when the relay node sends the message ] is said to be _ achievable _ for the bidirectional broadcast channel if for any there is an and a sequence of -codes such that for all we have and while when .the set of all achievable rate pairs is the _ capacity region _ of the bidirectional broadcast channel and is denoted by .achievable rate pairs and a capacity region can be also defined for average probability of error .[ theorem : capacity ] the capacity region of the bidirectional memoryless broadcast channel is the set of all rate pairs ] with values in and joint probability distribution .the cardinality of the range of can be bounded by .the theorem is proved in the following three subsections . in the first subsectionwe prove the achievability , i.e. a coding theorem .we prove a weak converse with respect to the maximum probability of error in the second subsection .then the theorem is proved with the third subsection where we show that a cardinality of two is enough for the range of the auxiliary random variable . here, we adapt the random coding proof for the degraded broadcast channel of to our context . first , we prove the achievability of all rate pairs ] of length with and according to . to send the pair ] .this gives the decoding set and indicator function when with ] . in the following ,we show that if for any , we have \rightarrow 0 ] we have to distinguish between the receiving nodes .we present the analysis for , the case follows accordingly .thereby , we use the fact that for \neq[w_1,\hat{w}_2] ] . since the average probabilities of error over the codebooks is small , there exists at least one codebook with a small average probabilities of error .this implies that we have and .we define sets since , we can bound the cardinality for .then from it follows now , let be the set of having the property that for each there are at least choices of so that \in{{\cal{q}}} ] .accordingly , we have so that it follows that using .this means that there exists an index set with indices , to each of which we can find an index set with indices so that we have for each and a maximum error , .it follows that there exist one - to - one mappings , , for each with ] .accordingly , there exist mappings , , with ] , which can be made arbitrary close to ] . from the chain rule for entropies we have is a function of and , we have .further , since is a binary - valued random variable , we get . 
so that finally with the next inequality (w_2|y_1^n , w_1,e_1=0 ) + { { \mathbbm{p}}}[e_1=1]h(w_2|y_1^n , w_1,e_1=1)\nonumber\\ & \leq(1-\mu_1^{(n)})0+\mu_1^{(n)}\log(|{{\cal{w}}}_2|-1 ) \leq\lambda_1^{(n ) } \log|{{\cal{w}}}_2|\nonumber\end{aligned}\ ] ] we get fano s inequality for our context .therewith , we can bound the entropy as follows where the equations and inequalities follow from the independence of and , the definition of mutual information , lemma 1 , the chain rule for mutual information , the positivity of mutual information , and the data processing inequality .if we divide the inequality by we get the rate using the memoryless property and again standard arguments .a similar derivation for the source rate gives us the bound with for as .this means that the entropies and are bounded by averages of the mutual informations calculated at the empirical distribution in column of the codebook .therefore , we can rewrite these inequalities with an auxiliary random variable , where with probability .we finish the proof of the converse with the following inequalities and accordingly where , , when .thereby , and are new random variables whose distribution depend on in the same way as the distributions of and depend on . up to now the auxiliary random variable is defined on a set with arbitrary cardinality .next , we will show that is enough . with fenchel bunt s extension of carathodory s theorem it follows that any rate pair in is achievable by time - sharing between two rate pairs from , i.e. is enough .[ theo : fenchel ] if has no more than connected components ( in particular , if is connected ) , then any can be expressed as a convex combination of elements of . since for any have \in{{\cal{r}}}(p(x))]-achievable rate pairs .then it follows from the strong converse for the maximum probability of error that the ]-capacity region in terms of average probability of error for sufficiently small average error .here , we derive a sharper converse to the coding theorem for the bidirectional broadcast channel .we prove the full strong converse for the capacity region defined with respect to the maximum error probability , i.e. for all .additionally , we show that the ] is said to be ]-achievable rates with respect to the maximum probability of error is denoted by .it is clear that and hold .the content of the strong converse is that can not be a proper subset of for : [ strong - converse - max - error ] for memoryless bidirectional broadcast channel we have for all .let ]-achievable rate pair , thus , by definition , for any we can find a sequence of -codes and such that for all following conditions are satisfied : 1 . and .2 . for . for those consider the families of partitions associated with the decoder maps , i.e. for each we have a partition of and analogously for each a partition of such that for all and we have and where .according to the second part of theorem [ blowing - up ] we can find a sequence of positive integers with such that for the sets we have and with .the sets are not necessarily disjoint for different values of .the same applies to the sets .nevertheless , we show now that for any given each is contained in at most sub - exponentially many . to this end , for any given and we define the set and claim that holds . 
the proof is given in .we reproduce the full argument for convenience .it is obvious that if and only if .therefore , since the sets are disjoint , we have with by the first part of theorem [ blowing - up ] .a similar result holds for the analogously defined set .let us consider two independent , uniformly distributed random variables and taking values in the sets and and a random variable with values in such that then the probability distribution of the whole system is given by for , , and .furthermore , for given and let us define as in the proof of the weak converse one can show that holds .now , we need a variant of fano s inequality which incorporates the quantity defined in ( [ sc-2 ] ) . therefore , we use the following elementary entropy inequality : for a probability distribution on a finite set and an arbitrary we have then for given and we set and obtain where we have applied eq .( [ sc-4 ] ) to each sum and then used eq .( [ sc-1 ] ) with the abbreviation . denotes the entropy of the distribution . averaging with respect to and using the concavity of the entropy we arrive at with .note that by ( [ sc-2 ] ) , our definition of in eq .( [ sc - x ] ) and ( [ sc - y ] ) we have where the third equality holds since iff and the last inequality is by eq .( [ sc-0 ] ) . thus ( [ sc-3 ] ) , ( [ sc-6 ] ) and ( [ sc-7 ] ) show that similar reasoning shows that also holds .it is obvious that as in the proof of the weak converse the mutual informations on the right hand sides of ( [ sc-8 ] ) and ( [ sc-9 ] ) can be written as and for a suitable random variable taking values in .note that by the proof of the coding theorem with the weak converse the rates and are achievable .thus , we can conclude our proof by noting that for sufficiently large we have and and that is closed .this shows that and we are done .we give now the partial extension of theorem [ strong - converse - max - error ] to the capacity region which is defined similarly to the difference being only that we use the average probability of error .our strategy will be to reduce the statement to the theorem [ strong - converse - max - error ] for sufficiently small . for memorylessbidirectional broadcast channel it holds that for all and or and .let \in { { { \cal{c}}}_{\mathrm{bc , av}}}({{\varepsilon}}_1,{{\varepsilon}}_2) ] .thus , we can apply our theorem [ strong - converse - max - error ] to conclude that for and \in { { { \cal{c}}}_{\mathrm{bc}}} ] achievable using xor at the relay node according to .we will now look at the achievable bidirectional rate region where we use in each phase the optimal strategies .thereby , we optimize the time - division between the mac phase with memoryless multiple access channel and bc phase with memoryless broadcast channel . of course ,due to the a priori separation into two phases , this strategy need not be the optimal strategy for the bidirectional relay channel .let and denote the achievable rates for transmitting a messages from node 1 to node 2 and a message from node 2 to node 1 with the support of the relay node . 
in more detail , node 1 wants to transmit message with rate in channel uses of the bidirectional relay channel to node 2 .simultaneously , node 2 wants to transmit message with rate in channel uses to node 1 .then let and denote the number of channel uses in the mac phase and bc phase with the property ] and \in{{{\cal{c}}}_{\mathrm{bc}}} ] which are achievable with any time - division factor ] half of the arithmetical mean between the boundary rate pairs of the capacity regions where we have .in this work we present the broadcast capacity region of the two - phase bidirectional relay channel . thereby , each receiving node has perfect knowledge about the message intended for the other node .furthermore , the proposed achievable rate region of the two - phase bidirectional relay channel is in general larger than the rate region which can be achieved by applying the network coding principle on the decoded data .the coding theorem and weak converse are easily extended to gaussian channels with input power constraints .we have also shown the strong converse with respect to the maximum error criterion for the broadcast phase .this result implies then that the capacity region defined with respect to the average error probability remains constant for all error parameters \in ( 0,\frac{1}{2})\times ( 0,\frac{1}{4}) ] .t. j. oechtering and h. boche , `` optimal resource allocation for a bidirectional regenerative half - duplex relaying , '' in _ ieee international symposium on information theory and its applications ( isita 06 ) _ ,seoul , korea , 2006 , pp .528 533 .y. wu , p. a. chou , and s. y. kung , `` information exchange in wireless networks with network coding and physical - layer broadcast , '' in _ proceedings of the 39th annual conference on information sciences and systems ( ciss ) _ , march 2005 .c. schnurr , t. j. oechtering , and s. staczak , `` on coding for the broadcast phase in the two - way relay channel , '' in _ proceedings of the 41st annual conference on information sciences and systems _ , 2007 .r. ahlswede , `` on two - way communication channels and a problem by zarankiewicz , '' in _sixth prague conf . on inf .fct s and rand .proc.__1em plus 0.5em minus 0.4empubl .house chechosl .academy of sc . , sept .
In a three-node network, a half-duplex relay node enables bidirectional communication between two nodes with a spectrally efficient two-phase protocol. In the first phase, the two nodes transmit their messages to the relay node, which decodes them and broadcasts a re-encoded composition in the second phase. In this work we determine the capacity region of the broadcast phase. In this scenario each receiving node has perfect information about the message that is intended for the other node. The resulting set of achievable rates of the two-phase bidirectional relaying includes the region which can be achieved by applying XOR on the decoded messages at the relay node. We also prove the strong converse for the maximum error probability and show that this implies that the capacity region defined with respect to the average error probability is constant for small values of the error parameters.
supercapacitors , also called ultracapacitors or electrochemical double layer capacitors ( edlcs ) , bridge the gap between batteries and conventional dielectric capacitors .the most common electrode material is carbon - based and include activated carbon , carbon nanotubes , and graphene . to go beyond electrostatic charge storage in the double layer ,pseudocapacitive and faradaic components ( `` faradaic components '' henceforth ) such as conducting polymers and metal oxides were incorporated into supercapacitors .two ways of incorporating the faradaic material have been employed , both in an asymmetric ( or hybrid ) configuration where : ( 1 ) a carbon electrode and a faradaic electrode form the two electrodes of a supercapacitor ; ( 2 ) at least one of the electrodes is a composite material that combines advantages of electrochemical double layer charge storage as well as surface redox or intercalation charge storage .carbon nanotubes ( cnt ) have a high aspect ratio and form porous networks with good conductivity . as such, they have been implemented as supercapacitor electrodes in many instances in the literature .titanium oxide is a versatile compound and can be found in several energy harvesting and energy storage applications , namely photocatalysis , secondary batteries , and supercapacitors . among the three polymorphs of tio, the crystal structure of anatase can accommodate the most li+ per tio unit and is an attractive energy storage material due to its li - intercalation capabilities and cycle life .tio nanowires have been synthesized by hydrothermal methods , functioning as the anode of a supercapacitor with carbon nanotubes acting as the cathode .these are type ( 1 ) asymmetric supercapacitors . in this study ,the application of a nanocomposite containing tio nanoparticles and multiwall carbon nanotubes in a single supercapacitor electrode is reported .results from conventional as well as new electrochemical characterization methods show not only the various aspects of the energy storage mechanism in the electrode but also the combination of double layer and faradaic capacitance as a synergy that leads to high energy density and good power density in a single electrode .cyclic voltammetry ( cv ) as well as galvanostatic cycling ( gc ) were performed on the cnt - tio composite electrodes .cyclic voltammetry of the composite electrode at various scan rates was measured in 1 m liclo in propylene carbonate with lithium metal sheets as the reference electrode and counter electrode , shown in figure 1 .the voltage window was 1 v to 2.6 v vs. li / li+ .a series of galvanostatic measurements were also performed , two of which are shown in figure 2 .composite electrode at various scan rates in 1 m liclo in propylene carbonate with lithium metal sheets as the reference electrode and counter electrode . ]composite electrode with applied currents densities 0.71 a / g and 3.6 a / g . 
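As a hedged post-processing sketch (not the authors' analysis code), the specific capacitance can be estimated from a single cyclic voltammogram with the common convention C_sp = ∫|i| dV / (2 m ν ΔV), where m is the electrode mass, ν the scan rate and ΔV the potential window. The rectangular synthetic CV, the mass and the scan rate below are placeholders for the measured curves; only the potential window is taken from the text.

```python
import numpy as np

m, nu = 1.0e-3, 0.010          # electrode mass (g) and scan rate (V/s) -- placeholders
V_lo, V_hi = 1.0, 2.6          # potential window vs Li/Li+ used in the text

# Placeholder CV: replace with the measured anodic and cathodic sweeps.
v_up = np.linspace(V_lo, V_hi, 400)
i_up = 3.0e-4 * np.ones_like(v_up)            # A, anodic branch
v_dn = np.linspace(V_hi, V_lo, 400)
i_dn = -3.0e-4 * np.ones_like(v_dn)           # A, cathodic branch

# Integrate |i| over the full cycle (the cathodic integral is negative because
# the potential axis decreases, hence the absolute value).
q_integral = np.trapz(np.abs(i_up), v_up) + abs(np.trapz(np.abs(i_dn), v_dn))   # A*V
C_sp = q_integral / (2.0 * m * nu * (V_hi - V_lo))                              # F/g
print(f"CV-derived specific capacitance ~ {C_sp:.0f} F/g (placeholder data)")
```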
]the electrode capacitance is given by and the specific capacitance by .the storage capacity of the composite electrode decreases with increasing current but approaches a constant value , shown in figure 3 along with the coulombic efficiency .the galvanostatic charging for a given current is described by where is the capacitance of the electrode and is the internal resistance .the quantity , denoted by , is the voltage drop at the beginning of charging and discharging cycles .the temporal slope plotted as a function of the potential is shown in figure 4 .these plots will be called temporal slope voltammograms ( tsvs ) in this article .similar to cvs , the anodic peaks and cathodic peaks shift as the applied current increases .however , whereas the faradaic / intercalation peaks flatten as the scan rates increase in cvs , the temporal slope peaks remain sharp even for large current densities . as shown in figure 5a , as vanishes , both anodic and cathodic peak potentials approach the same value , the standard potential for the intercalation reaction . a plot of the temporal slope peaks versus shows a linear relationship ( figure 5b ) .linear regression gives the peak capacitances at , 2410 f / g for li extraction and 1450 f / g for li insertion .as a point of reference , the theoretical specific capacitance of tio assuming one li+ ion per tio molecule is 2295 f / g , calculated using the formula , where is the faraday constant and is the tio molar mass .the peak capacitance at the standard potential of the faradaic reaction could be an important figure of merit in characterizing and optimizing nanocomposites as the mass normalized quantity is nearly independent of the double layer contributions .in addition to temporal slope voltammograms , a new method to distinguish between electrostatic and faradaic contributions to the energy storage capacity warrants discussion . the distinction is important because one is often interested in the effect of nanoparticle size on the faradaic reactions and whether the faradaic process involves only surface sites or the bulk .the differential form of the capacitor equation includes both capacitor behavior , where the capacitance is a constant that depends only on the capacitor13s geometry and dielectric material , and battery behavior , where the potential is ideally constant and the storage capacity varies linearly with the state of charge . in the case where the potential is the experimentally tunable parameter , charge conservation requires the current associated with changes in the double layer is given by and the current due to reversible charge transfer reactions at the electrode surface is described by where , is the starting ion concentration at the electrode - electrolyte interface , is the width of the diffusion layer , is the area of the tio nanoparticles , and is the standard potential of the reaction .it follows that the charge stored is the charge accumulation as a function of potential can be obtained from galvanostatic measurements , as shown in figure 6 .the double layer capacitance can be obtained from the slope of for sufficiently large potentials , after available tio lattice sites are fully intercalated . .the anodic and cathodic intercepts are both 1.9 v. ( b ) temporal slope peaks are linear in .the absolute values of the two linear regression slopes yield peak capacitances . 
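The constructions just described can be sketched in a few lines (placeholder data throughout, so the printed numbers are only illustrative): the temporal slope dV/dt of a galvanostatic trace, the extrapolation of its peak values, which are linear in the applied current so that the inverse of the fitted slope estimates the peak capacitance, and the double-layer capacitance obtained from the slope of the charge-accumulation curve Q(V) at potentials beyond the intercalation plateau.

```python
import numpy as np

# (1) Temporal slope voltammogram from one galvanostatic trace: dV/dt versus V.
def temporal_slope(t, v):
    return np.gradient(v, t)          # V/s

# (2) Peak capacitance by extrapolation: the extremal slope found near the
# plateau is linear in the applied current, and dV/dt = I/C at the peak, so
# 1/|fitted slope| estimates the peak capacitance.
currents = np.array([0.10, 0.36, 0.71, 1.8, 3.6])                  # A/g, placeholders
peak_slopes = np.array([4.2e-5, 1.5e-4, 3.0e-4, 7.5e-4, 1.5e-3])   # V/s, placeholders
slope, _ = np.polyfit(currents, peak_slopes, 1)
print(f"peak capacitance ~ {1.0 / abs(slope):.0f} F/g")

# (3) Double-layer contribution: once the intercalation sites are filled,
# Q grows linearly with V and the slope is the double-layer capacitance.
V = np.linspace(1.0, 2.6, 200)                           # placeholder potentials (V)
Q = 80.0 * np.clip(V - 1.7, 0.0, 0.2) + 20.0 * V         # placeholder charge curve (C/g)
high = V > 2.1                                           # region past the plateau
C_dl = np.polyfit(V[high], Q[high], 1)[0]
print(f"double-layer capacitance ~ {C_dl:.0f} F/g")
```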
]dielectric capacitors experience changes in potential at a constant storage capacity , whereas batteries experience changes in the storage capacity at a constant potential .the methods outlined in this article enable researchers to electrochemically characterize nanocomposites that exhibit both capacitor - like and battery - like behaviors .the performance of the cnt - tio electrode can be compared to lithium - ion batteries and other electrochemical capacitors in the form of a ragone chart , shown in figure 7 .the storage capacity and the voltage range can be converted to energy density and power density with the formulas the synergy between the two materials gives rise to high energy and good power . from the perspective of the high energy density li - intercalation material , namely the tio nanoparticles , the addition of carbon nanotubes led to a significant increase in power density .electrode with respect to li - ion batteries and other electrochemical capacitors .[ adapted from ref .18 with permission . ]a supercapacitor electrode was fabricated from a nanocomposite consisting of multiwall carbon nanotubes and titanium oxide nanoparticles .conventional electrochemical characterizations cyclic voltammetry and galvanostatic cycling gave a specific capacitance of 345 f / g at a current density of 0.1 a / g .new electrochemical characterization techniques derived from galvanostatic measurements allow one to obtain the peak capacitance associated with intercalation and to distinguish between electrostatic and faradaic contributions to the total charge stored .the new techniques show that most of the charge is stored faradaically , via the intercalation mechanism .the double layer charge storage mainly attributed to carbon nanotubes brought significant improvement in power density to the faradaic material .as a nanocomposite , cnt - tio achieved a maximum energy density of 31 wh / kg .helpful discussions from george grner , bruce dunn , ryan maloney , veronica augustyn as well as measurement support and tio nanoparticles from veronica and jesse ko are acknowledged and appreciated .
supercapacitor electrodes fabricated from a nanocomposite consisting of multiwall carbon nanotubes and titanium oxide nanoparticles were characterized electrochemically . conventional electrochemical characterizations cyclic voltammetry and galvanostatic charge - discharge in a lithium - based electrolyte gave a specific capacitance of 345 f / g at a current density of 0.1 a / g . new electrochemical characterization techniques that allow one to obtain the peak capacitance associated with intercalation and to distinguish between electrostatic and faradaic charge storage are described and applied to the electrode measurements . finally , the maximum energy density obtained was 31 wh / kg .
cosmic microwave background ( cmb ) radiation is one of the most ancient fossils of the universe .the observations of the nasa wilkinson microwave anisotropy probe ( wmap ) satellite on the cmb temperature and polarization anisotropies have put tight constraints on the cosmological parameters . in addition , some anomalies in cmb field have also been reported soon after the release of the wmap data ( see as a review ) . among these ,an extremely cold spot ( cs ) centered at galactic coordinate ( , ) with a characteristic scale about was detected in the spherical mexican hat wavelet ( smhw ) non - gaussian analyses .compared with the distribution derived from the isotropic and gaussian cmb simulations , due to this cs , the smhw coefficients of wmap data have an excess of kurtosis .in addition , several non - gaussian statistics , such as the amplitude and area of the cold spot , the higher criticism and so on , have also been applied to identify this wmap cs . since then , various alternative explanations for the cs have been proposed , including the possible foregrounds , sunyaev - zeldovich effect , the supervoid in the universe , and the cosmic texture . in order to distinguish different interpretations , some analyses have been carried out , such as the non - gaussian tests for the different detectors and different frequency channels of wmap satellite , the investigation of the nvss sources , the survey around the cs with megacam on the canada - france - hawaii telescope , the redshift survey using vimos on vlt towards cs , and the cross - correlation between wmap and faraday depth rotation map .nearly all the interpretations of cs are related to the local characters of the cmb field , so the studies on the local properties of cs are necessary . in this paper, we shall propose a set of novel non - gaussian statistics , i.e. the local mean temperature , variance , skewness and kurtosis , to study the local properties of the cmb field . by altering the radium of the cap around cs , we study the local properties of cs at different scales .compared with the _ coldest spots _ in the random gaussian simulations , we find the local non - gaussianity of wmap cs , i.e. it deviates from gaussianity at significant level .furthermore , we find the significant difference between wmap cs and gaussian simulations at all the scales . to study the possible origin of wmap cs , we have also compared it with the spots at the same position of the simulated gaussian samples .we find that different from the general properties of the foregrounds , the point sources or various local contaminations , in the small scales the local variance , skewness and kurtosis values of cs are not significantly large , except for its coldness in temperature .however , after the careful comparison with gaussian simulations , we find that when the local variance and skewness are systematically large .this implies that cs prefers a large - scale non - gaussian structure . in order to confirm it ,we repeat the analyses adopted by many authors , where the statistics of temperature and kurtosis in smhw domain are used .we apply these analyses to the wmap data with different , and find that nearly all the non - gaussianities of cs are encoded in the low multipoles .it was claimed that the cosmic texture seemed to be the most promising explanation , by investigating the temperature and area of cs . 
in order to check this explanation by our local statistics ,we superimpose a similar cosmic texture into the simulated gaussian samples , and calculate the local statistics of the cmb fields .we find that the excesses of the local statistics of wmap cs can be excellently explained by this non - gaussian structure .so our local analyses of the cs supports the cosmic texture explanation .the rest of the paper is organized as follows : in section 2 , we introduce the wmap data , which will be used in the analyses . in section 3 ,we define the local statistics and apply them to wmap data . in section 4 , the dependence of the wmap non - gaussianities on the value of are studied , which shows that the non - gaussian signals are all encoded in the low multipoles .section 5 summarizes the main results of this paper .in our analyses , we shall use the wmap data including the vw7 map , ilc7 map and nilc5 map .the cmb temperature maps derived from the wmap observations are pixelized in healpix format with the total number of pixels . in our analyses, we use the 7-year wmap data for v and w frequency bands with . the linearly co - added map ( written as vw7 " )is constructed by using an inverse weight of the pixel - noise variance , where denotes the pixel noise for each differential assembly ( da ) and represents the full - sky average of the effective number of observations for each pixel .the wmap instrument is composed of 10 das spanning five frequencies from 23 to 94 ghz .the internal linear combination ( ilc ) method has been used by wmap team to generate the wmap ilc maps .the 7-year ilc ( written as ilc7 " ) map is a weighted combination from all five original frequency bands , which are smoothed to a common resolution of one degree . for the 5-year wmap data , in authors have made a higher resolution cmb ilc map ( written as nilc5 " ) , an implementation of a constrained linear combination of the channels with minimum error variance on a frame of spherical called needlets . in this paper, we will also consider both these ilc maps for the analysis .note that all these wmap data have the same resolution parameter , and the corresponding total pixel number . in comparison with wmap observations to give constraints on the statistics , a cosmological model is assumed with the parameters given by the wmap 7-year best - fit values : , , , , and at .we simulate the cmb maps for each frequency channel by considering the wmap beam resolution and instrument noise , and then co - add them with inverse weight of the full - sky averaged pixel - noise variance in each frequency to get the simulated vw7 maps .similar to the previous work , to simulate the ilc7 map , we ignore the noises and smooth the simulated map with one degree resolution . and for nilc5 , we consider the noise level and beam window function given in . in all the random gaussian simulations ,we assume that the temperature fluctuations and instrument noise follow the gaussian distribution , and do not consider any effect due to the residual foreground contaminations .in this section , we shall investigate the local properties of the cmb field , especially the wmap cold spot , by using the local statistics : mean temperature , variance , skewness and kurtosis .the statistics of local skewness and kurtosis were firstly introduced in . for a given cmb map with ( vw7 , ilc7 or nilc5 ) , we degrade it to the lower resolution to reduce the effect of the noises . 
andthen , for this degraded map , the constructive process can be formalized as follows : let be a spherical cap with an aperture of degree , centered at .we can define the functions ( mean temperature ) , ( standard deviation ) , ( skewness ) and ( kurtosis ) that assign to the cap , centered at by the following way : where is the number of pixels in the cap , is the temperature at pixel .obviously , the values and obtained in this way for each cap can be viewed as the measures of non - gaussianity in the direction of the center of the cap . for a given aperture ,we scan the celestial sphere with evenly distributed spherical caps , and build the - , - , - , -maps . in our analyses, we have chosen the locations of centroids of spots to be the pixels in resolution . by choosing different values, one can study the local properties of the cmb field at different scales . in , the statistics and with large values have been applied to study the large - scale global non - gaussianity in the cmb field . however , in this paper we shall apply them to study the cmb local properties .( left ) and ( right ) maps for vw7 data . in both maps ,we have adopted .,title="fig:",width=151 ] ( left ) and ( right ) maps for vw7 data . in both maps ,we have adopted .,title="fig:",width=151 ] it is important to mention that these definitions can not well localize the non - gaussian sources .for example , in fig .[ fig1 ] the kurtosis map ( left panel ) , we find the clear circular morphology around the point sources .this means that the values of always maximize / minimize at the edge of the circles , rather than the center of the circles . to overcome it and localize the non - gaussian sources , it is better to define the following average quantities , where for mean temperature , standard deviation , skewness and kurtosis . is the corresponding local quantities defined above , and is again the number of pixels in the cap . for the comparison , we plot the corresponding in the right panel of fig .[ fig1 ] .now , let us apply the method to the cmb maps .firstly , we consider the vw7 map . by choosing , we plot maps in fig .the figures clearly show that these local statistics are sensitive to the foreground residuals and various point sources . from -map ,one finds that most non - gaussianities come from the galactic plane around .however , from - , - and -maps , various extra point sources far from the galactic plane are clearly presented .these contaminations can be well excluded by the kq75y7 mask , which is clearly shown in fig .[ fig22 ] . in this figure ,we plot the same figures as those in fig .[ fig2 ] , but the mask is applied . 
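Before turning to the ILC maps, we note that the construction above can be sketched compactly with healpy: the cap centres are taken as the pixel centres of a coarse grid and the four local statistics are evaluated on the pixels falling inside each cap. The grid resolution and the aperture below are placeholders rather than the values used in the analysis.

```python
import numpy as np
import healpy as hp
from scipy.stats import skew, kurtosis

def local_statistics(cmb_map, nside_centres=16, aperture_deg=5.0):
    """Mean, standard deviation, skewness and kurtosis of the map inside a
    spherical cap of the given aperture, for every cap centre of a coarse grid."""
    nside_map = hp.get_nside(cmb_map)
    radius = np.radians(aperture_deg)
    npix_centres = hp.nside2npix(nside_centres)
    stats = np.zeros((npix_centres, 4))
    for ipix in range(npix_centres):
        disc = hp.query_disc(nside_map, hp.pix2vec(nside_centres, ipix), radius)
        t = cmb_map[disc]
        stats[ipix] = [t.mean(), t.std(), skew(t), kurtosis(t)]
    return stats            # the four columns play the role of the four local maps

# usage: stats_5deg = local_statistics(degraded_map, nside_centres=16, aperture_deg=5.0)
```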
similarly , we also apply the method to ilc7 and nilc5 maps by choosing .the results are shown in fig .[ fig3 ] and fig .we find that these ilc maps are much cleaner than vw7 map in all the four - , - , - , -maps .even so , from the - , - , -maps , we also find some non - gaussian sources in the galactic plane .in addition , two significant sources at ( ) and ( ) are clearly presented in ilc7 maps , which have been identified as the known point sources , and excluded by the kq75y7 mask .nilc5 map is slightly clearer than ilc7 , especially the significant point sources at ( ) and ( ) disappear now .but the contaminations in the galactic plane are still quite significant .map , map , map and map for vw7 , where .note that the and maps have the unit : mk.,title="fig:",width=151 ] map , map , map and map for vw7 , where .note that the and maps have the unit : mk.,title="fig:",width=151 ] map , map , map and map for vw7 , where .note that the and maps have the unit : mk.,title="fig:",width=151 ] map , map , map and map for vw7 , where .note that the and maps have the unit : mk.,title="fig:",width=151 ] , but kq75y7 mask has been applied.,title="fig:",width=151 ] , but kq75y7 mask has been applied.,title="fig:",width=151 ] , but kq75y7 mask has been applied.,title="fig:",width=151 ] , but kq75y7 mask has been applied.,title="fig:",width=151 ] , but vw7 map is replaced by ilc7 map.,title="fig:",width=151 ] , but vw7 map is replaced by ilc7 map.,title="fig:",width=151 ] , but vw7 map is replaced by ilc7 map.,title="fig:",width=151 ] , but vw7 map is replaced by ilc7 map.,title="fig:",width=151 ] , but vw7 map is replaced by nilc5 map.,title="fig:",width=151 ] , but vw7 map is replaced by nilc5 map.,title="fig:",width=151 ] , but vw7 map is replaced by nilc5 map.,title="fig:",width=151 ] , but vw7 map is replaced by nilc5 map.,title="fig:",width=151 ] in this subsection , we shall focus on the local statistics for wmap cs , and compare with those of the _ coldest spot _ in random gaussian simulations . for a given map ( or ) derived from wmap data , the values of centered at csare calculated for the scales of , , , , , , , , , , , , , , . from fig .[ fig2 ] , we find in the maps derived from vw7 data , there are many point sources .so , for a fair comparison , in this subsection we shall only consider the ilc7 and nilc5 maps .the statistics for the ilc7 maps are displayed in fig .we compare them with 500 gaussian simulations . for each simulated temperature anisotropy map with , we search for the _ coldest spot _ and its position , which will be used for the comparison . by the exactly same process, we derive the corresponding maps .then , for each and , we study the distribution of 500 values ( is the statistic of the _ coldest spot _ in the corresponding simulation ) , and construct the confident intervals for the statistic .the and confident intervals are illustrated in fig .[ fig5 ] .as we can imagine , if cs is simply cold without any other non - gaussianity , the statistics for , and should be normal , i.e. close to the mean values of gaussian simulations for any .on the other hand , if cs is a combination of some small - scale non - gaussian structures , as some explanations in , the local variance , skewness and kurtosis in small scales should be quite large . however , as we will show below , none of these is the case of wmap cs . from fig .[ fig5 ] , we find that for statistic , wmap cs is excellently consistent with gaussian simulations when . 
however , when , it deviates from simulations at more than confident level .this is caused by the fact that wmap cs is surrounded by an anomalous hot ring - like structure , which is firstly noticed by zhang & hunterer in .for statistic , deviations from gaussianity outside the confident regions are at the scales and .furthermore , the deviations outside the confident regions are detected in skewness at scales of and in kurtosis at scales of .for the nilc5 map , the similar deviations for these statistics have also been derived . combining these results, we find that wmap cs seems to be a nontrivial large - scale structure , rather than a combination of some small non - gaussian structures ( for instance , the point sources or foreground residuals , which always follow the non - gaussianity in the small scales as shown in fig .[ fig2 ] ) .this is one of the main conclusions of this paper .we now consider , in more details , the most significant deviation from gaussianity obtained in fig .[ fig5 ] . similar to , for each panel of fig .[ fig5 ] , we define the statistic as follows : where . and run through to . are the values of the statistics for wmap cs , and are those for the simulations . is average value of . is the covariance matrix of the vector .note that the correlations between and ( ) are very strong ( the corresponding correlation coefficienta are all larger than 0.6 ) , which significantly affect the corresponding value , especially when the values of oscillate for different .the total value can also be defined as .we list the values ( case 1 ) in table [ table1 ] for ilc7 and in table [ table2 ] for nilc5 . in order to be compared with gaussian simulations , for each realization , we repeat the calculation in eq.([chi2 ] ) , but the quantities of wmap cs are replaced by the corresponding quantities of the gaussian realization .[ fig6 ] illustrates the histogram of statistic for the ilc7 based map , where we find that wmap cs in ilc7 deviates from gaussianity at the significant level . at the same time, we also obtain the same results from nilc5 map .statistic for the _ coldest spots _ obtained from 500 monte carlo simulations .the observed statistic for wmap ilc7 map is shown by the solid vertical line ( red online).,width=302 ] .the values of various statistics for ilc7 based maps . in case 1 , wmap cs compares with the _ coldest spots _ in 500 random gaussian simulations , in case 2 , wmap cs compares with the spots at ( , ) in 500 simulations , and case 3 is same with case 2 , but a cosmic texture has been superimposed in each simulated sample .[ table1 ] [ cols="<,<,>,>,>,>,<,>,<,>",options="header " , ]if wmap cs is a large - scale non - gaussian structure , as we have found in previous section , the non - gaussianity caused by cs should be encoded in the low multipoles , rather than the high multipoles . in this section, we shall confirm it by studying the effect of different multipoles on the wmap non - gaussian signals .following , in this section we study the non - gaussianity of wmap data by using the wavelet transform , which can emphasize or amplify some features of the cmb data at a particular scale .the smhws are defined as where is the stereographic projection variable , and is the co - latitude . 
is the scale , and is the constant for the normalization , which can be written as ^{-1/2}.\ ] ] the continuous wavelet transform stereographically projected over the sphere with respect to is given by where and are the stereographic projections to sphere of center of the spot and the dummy location , respectively . in our analyses of this section ,the locations of centroids of spots are chosen to be centers of pixels in resolution .following , we define the occupancy fraction as follows to account for the masked parts of the sky , where is kq75y7 mask . in order to reduce the biases due to masking, we only include the results of for which .( upper ) and ( lower ) for various cases . in each panel ,the dark line ( blue online ) is for the results of vw7 map , and grey line ( red online ) is for ilc7 map .the _ circles _ ( red online ) are for the ilc7 with , the _ crosses _ ( red online ) are for that with , and the _ squares _ ( red online ) are for that with .,width=302 ] in our analyses of this section , we shall consider the vw7 and ilc7 data . we degrade them to a lower resolution ,then apply the kq75y7 mask . for eachmasked wmap data , we use the smhw transform in eq .( [ smhw ] ) to get the corresponding map in wavelet domain . to investigate the non - gaussianity in different scales , for each map we consider the cases with , , , , , , , , , .similar to many authors , we define the statistics and as follows to study the non - gaussianity related to wmap cs : here is the standard deviation of the distribution of all spots in a given map , and is the _coldest spot _ in this distribution . from the definitions , we find that describes the cold spot significance , and is the kurtosis of spots in a given map . in fig . [ fig8 ]( left panels ) , we present the values of and for different scale parameter . both vw7 and ilc7illustrate the same results : the values of both and maximize at .the results then are compared with 2000 randomly generated gaussian simulations , with the exactly same methodology applied .so we can get the probabilities of simulations , which have the larger or than those of wmap data .these probabilities for both statistics are also shown in fig .[ fig8 ] ( right panel ) .so , similar to other works , we find that when , wmap data have the deviations from the gaussian simulations , i.e. the corresponding probabilities for the statistics and/or are smaller than . now , let us study which multipoles account for the non - gaussianity above . we consider the original ilc7 map , and expand it via spherical harmonic composition : where are the spherical harmonics and are the corresponding coefficients .then the new map can be constructed as follows : it is clear that this new map includes only the low multipoles $ ] .thus , we can repeat the processes above , but the ilc7 map is replaced by . in the analyses , we choose three cases with , and to study the effect of different multipoles , and show the results in fig .[ fig8 ] with _ circles _ , _ crosses _ and _ squares _ , respectively .we find that for the statistic , if the lowest multipoles are considered , the values of the statistic and the corresponding probabilities are quite close to those gotten in the map including all the multipoles .these clearly show that the coldness of cs are mainly encoded in these lowest multopole range , which is consistent with the conclusion in . 
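The low-multipole reconstruction used for these comparisons, together with the two cold-spot statistics evaluated on a map of wavelet coefficients, can be sketched as follows (healpy conventions; the band limit is a placeholder, and the mask and occupancy cut described above are omitted for brevity):

```python
import numpy as np
import healpy as hp

def low_pass(cmb_map, l_cut):
    """Rebuild the map from its spherical-harmonic coefficients with l <= l_cut."""
    nside = hp.get_nside(cmb_map)
    alm = hp.map2alm(cmb_map, lmax=3 * nside - 1)
    fl = np.zeros(3 * nside)          # multipole filter: 1 up to l_cut, 0 above
    fl[:l_cut + 1] = 1.0
    return hp.alm2map(hp.almxfl(alm, fl), nside)

def cold_spot_statistics(wavelet_map):
    """Normalised coldest coefficient and kurtosis of a wavelet-coefficient map."""
    sigma = wavelet_map.std()
    delta = wavelet_map.min() / sigma
    kurt = ((wavelet_map - wavelet_map.mean()) ** 4).mean() / sigma ** 4 - 3.0
    return delta, kurt
```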
while for the statistic , we find the wmap data are quite normal for the case with , compared with the gaussian simulations .however , if is considered , the results for both statistics are very close to those in the map with all multipoles .so we conclude that wmap cs reflects directly the peculiarities of the low multipoles , which suggests that cs should be a large - scale non - gaussian structure , rather than a combination of some small structures .this consists with our conclusion in section 3 .since the discovery of the non - gaussian cold spot in wmap data , it has attracted a great deal of attention , and many explanations have been proposed . to distinguish them , in this paperwe have studied the local properties of wmap cs at different scales by introducing the local statistics including the mean temperature , variance , skewness and kurtosis .compared with the _ coldest spots _ in random gaussian simulations , wmap cs deviates from gaussianity at significant level , and the non - gaussianity of cs exists at all the scales .however , when compared with the spots at the same position in the simulated gaussian maps , we found the significant excesses of local variance and skewness in the large scales , rather than in the small scales .furthermore , we found that the non - gaussianity caused by cs is totally encoded in the wmap low multipoles .these all imply that wmap cs is a large - scale non - gaussian structure , rather than the combination of some small structures .it was claimed by many authors that the cosmic texture with a characteristic scale about , rather than other mechanisms , could provide the excellent explanation for wmap cs . by comparing with the random simulations including the similar texture structure, we found this non - gaussian structure could excellently explain the excesses of the statistics .so our results in this paper strongly support the cosmic texture explanation . in the end of this paper , it is important to mention that the non - gaussianity of wmap cs has been confirmed by the new planck observations on the cmb temperature . in the near future ,the polarization results of planck mission will be released , which would play a crucial role to test the wmap cs , as well as to reveal its physical origin .we are very grateful to the anonymous referee for helpful remarks and comments .we appreciate useful discussions with p. naselsky , j. kim , m. hansen and a.m. frejsel .we acknowledge the use of the legacy archive for microwave background data analysis ( lambda ) .our data analysis made the use of healpix and glesp .this work is supported by nsfc no .11173021 , 11075141 and project of knowledge innovation program of chinese academy of science .
We investigate the local properties of the WMAP cold spot (CS) by defining the local statistics: mean temperature, variance, skewness and kurtosis. We find that, compared with the _coldest spots_ in random Gaussian simulations, the WMAP CS deviates from Gaussianity at a significant level. Meanwhile, when compared with the spots at the same position in the simulated maps, the values of the local variance and skewness around the CS are systematically larger on large scales, which implies that the WMAP CS is a large-scale non-Gaussian structure rather than a combination of small structures. This is consistent with the finding that the non-Gaussianity of the CS is entirely encoded in the low WMAP multipoles. Furthermore, we find that the cosmic texture can account for all the anomalies in these statistics.

Key words: cosmic microwave background
epistasis tends to be prevalent for antimicrobial drug resistance mutations .sign epistasis means that the sign of the effect of a mutation , whether good or bad , depends on background .sign epistasis may be important for treatment strategies , both for antibiotic resistance and hiv drug resistance .for instance , there are sometimes constraints on the order in which resistance mutations occur .a particular resistance mutation may only be selected for in the presence of another resistance mutation .it is important to identify such constraints .a first question is how one can identify pairwise epistasis in a large system .we will discuss entropy and epistasis .information theory has been used for hiv drug resistance mutations and more extensively for analyzing human genetic disease ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?. for recent review articles on epistasis and fitness landscapes see e.g. , and for an empirical perspective .it is well established that genotypes are expected to be in equilibrium proportions if there is no epistasis in the system , i.e. , if fitness is multiplicative . for instance , if two rare mutations have frequencies and , then the frequency of the genotype combining the two mutations is expected to be close to .this statement holds true regardless if recombination occurs or not .we will explore the relation between entropy and epistasis for a system with constraints as described in the introduction . consider a 3-locus balletic system where a mutation at the first locus confers resistance , whereas mutations at the second and third loci are only selected for in the presence of the first mutation ( otherwise they are deleterious ) .we represent the case with a fitness graph ( figure 1 ) . as conventional ,000 denotes the wild - type .for instance , one obtains a system with the fitness graph as in figure 1 for the log - fitness values the gene interactions for a 3-loci system can be described by the sign pattern of 20 circuits , or minimal dependence relations .the relevant two - way interactions in this context be described by the six circuits corresponding to the faces of the 3-cube .specifically , the four inequalities express that there is positive epistasis for the first and second loci , as well as for the first and third loci .the two equalities show that there is no epistasis for the second and third loci , regardless of background .the total 3-way epistasis is zero as well , higher order gene interactions have also been described using walsh coefficients . for this landscape the walsh coefficient , which indicates an absence of background averaged epistasis for the second and third loci .we will consider entropy during the process of adaptation for this landscape .the starting point for adaptation is the wild - type 000 .we use a standard wright - fisher model for an infinite population with mutation rate .the gene frequencies and shared entropy after the given number of generations are listed in the table ..gene frequencies and shared entropy for an infinite population with mutation rate .[ cols="^,^,^,^,^,^,^,^,^ , < " , ] the shared entropy for the second and third loci differs from zero . however , there is no 2-way epistasis for the pair of loci . 
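The dynamics behind the table above can be reproduced schematically with a deterministic (infinite-population) mutation-selection recursion and a direct computation of the shared entropy between loci. The log-fitness values below satisfy the stated constraints, a beneficial first mutation and mutations at the other loci that are beneficial only on that background, with no interaction between the second and third loci, but they are otherwise arbitrary placeholders, as are the mutation rate and the generations at which the entropy is reported.

```python
import itertools
import numpy as np

mu = 1e-5                                  # per-locus mutation probability per generation
genotypes = list(itertools.product((0, 1), repeat=3))
a, b, c = 0.2, -0.1, 0.2                   # hypothetical effects: b < 0 < a and b + c > 0
w = np.array([np.exp(a * g[0] + (b + c * g[0]) * (g[1] + g[2])) for g in genotypes])

def mut_prob(g, h):
    """Probability that genotype g mutates into genotype h in one generation."""
    k = sum(gi != hi for gi, hi in zip(g, h))
    return (mu ** k) * ((1 - mu) ** (3 - k))

M = np.array([[mut_prob(g, h) for g in genotypes] for h in genotypes])  # M[h, g] = P(g -> h)

def step(p):
    p = w * p                              # selection
    p = M @ (p / p.sum())                  # mutation
    return p / p.sum()

def shared_entropy(p, i, j):
    """Mutual information (bits) between loci i and j under genotype frequencies p."""
    joint = np.zeros((2, 2))
    for g, pg in zip(genotypes, p):
        joint[g[i], g[j]] += pg
    pi, pj = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return (joint[nz] * np.log2(joint[nz] / np.outer(pi, pj)[nz])).sum()

p = np.zeros(8)
p[genotypes.index((0, 0, 0))] = 1.0        # start from the wild type 000
for t in range(1, 2001):
    p = step(p)
    if t % 500 == 0:
        print(t, round(shared_entropy(p, 1, 2), 4))   # loci 2 and 3 (0-indexed)
```

Run from the wild type, the recursion produces a non-zero shared entropy between the second and third loci during the sweep even though the pair does not interact, which is the point of the example above.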
By extrapolation, consider an analogous system with more than three loci: mutations at the remaining loci are selected for only if the first mutation has occurred, but there are no other interactions. One would then get non-zero shared entropy for pairs of loci that do not interact, although there is 2-way epistasis only for pairs involving the first locus.

The second landscape is closely related to the previous example. Indeed, the two-way interactions can be described by the sign pattern and the total 3-way epistasis is zero: also in this case, there is no epistasis for the second and third loci. Mutations at the second and third loci are selected for only in the presence of a mutation at the first locus. However, this fitness landscape differs from the previous example in that a mutation at the first locus is neutral for the wild-type. Suppose that 50 percent of hosts start a new treatment with 000 viruses, and 50 percent start with the 100 genotype. That could be realistic, for instance if the 100 genotype had some resistance to a previously used drug. By assumption, eventually one would have about 50 percent 000 genotypes and 50 percent 111 genotypes in the total population. The shared entropy of the second and third loci is then large, although there is no epistasis for the pair. This example also points to a fundamental problem relating pairwise epistasis and entropy: at the time when we have 50 percent 000 genotypes and 50 percent 111 genotypes, obviously no method can reveal pairwise epistasis.

We will now discuss a refined approach for identifying pairwise epistasis. Suppose that we have identified shared entropy for a particular pair of loci. Let $L_1$ denote the set of loci with non-zero shared entropy with the pair, let $L_2$ denote the set of loci with non-zero shared entropy for some locus in $L_1$, and so forth, and let $L$ denote the union of these sets. Fix one of the possible states for the loci in $L$, and consider the subsystem of genotypes determined by that state. If the shared entropy of the pair vanishes for all such states, then there is no indication of epistasis for the pair. We can apply the refined approach to the second and third loci in our example, where $L$ consists of the first locus; conditioning on its state, the shared entropy vanishes, and consequently there is no indication of epistasis for the second and third loci. The described method could be useful for identifying cases with shared entropy and no epistasis. However, it remains to explore to what extent the method is useful in a more general setting.

We have demonstrated that shared entropy for two loci does not imply epistasis for the pair. This observation holds true also in the absence of 3-way epistasis in a single environment. Entropy-based approaches to epistasis are coarse. We have discussed a refined approach which filters out some cases where shared entropy depends on states at other loci. There are obviously other reasons for caution in interpretations of entropy for drug resistance mutations. Different drugs constitute different environments. Some resistance mutations may be correlated if they are beneficial in the presence of a particular drug, but not for other drugs. In such cases entropy would not imply epistasis. Our results show that observations on entropy and epistasis based on 2-locus systems can be misleading for general systems. From a theoretical point of view, a better understanding of large systems would be useful for handling drug resistance data.

Let $X$ and $Y$ be discrete random variables with states $x_i$ and $y_j$. Let $p_i$ denote the frequency of $x_i$, and $p_{ij}$ the frequency for the combination of $x_i$ and $y_j$. The entropy and the joint entropy are defined as $$H(X) = -\sum_i p_i \log p_i, \qquad H(X,Y) = -\sum_{i,j} p_{ij} \log p_{ij}.$$

Desper, R., Jiang, F., Kallioniemi, O. P., Moch, H., Papadimitriou, C. H. and Schäffer, A. A. (1999).
Inferring tree models for oncogenesis from comparative genome hybridization data. _J. Comput. Biol._ 6, 37-51.

Goulart, C. P., Mentar, M., Crona, K., Jacobs, S. J., Kallmann, M., Hall, B. G., Greene, D. and Barlow, M. (2013). Designing antibiotic cycling strategies by determining and understanding local adaptive landscapes. _PLoS ONE_ 8(2): e56040. doi:10.1371/journal.pone.0056040.
Epistasis is a key concept in the theory of adaptation. Indicators of epistasis are of interest for large systems where systematic fitness measurements may not be possible. Some recent approaches depend on information theory. We show that considering shared entropy for pairs of loci can be misleading: shared entropy does not imply epistasis for the pair. This observation holds true also in the absence of higher-order epistasis. We discuss a refined approach for identifying pairwise interactions using entropy.
internet connects different routers and servers using different operating systems and transport protocols .this intrinsic heterogeneity of the network added to the unpredictability of human practices make the internet inherently unreliable and its traffic complex .there has been recently major advances in our understanding of the generic aspects of the internet and web structure and development .concerning data transport , most of the studies focus on properties at short time scales or at the level of individual connections , while studies on statistical flow properties at a large scale concentrate essentially on the phase transition to a congested regime . despite of these results , large scale studies of traffic variations in time and spaceare still needed before understanding the new social practices of internet users . in this paper , we study the spatial structure of the large scale flow .we present in part ii the data studied and in parts iii and iv the results of our analysis , showing the existence of a spanning network concentrating the major part of the traffic .finally , in part v we relate the flow properties and its spatial distribution to scientific activity measured by the number of published papers .an important difficulty is to obtain real data measurements of the internet traffic on a global scale . the availability of data of the french network ` renater ' allows us to consider the cartography of internet s traffic and its relation with regional socio - economical factors .the french network ` renater ' has about million users and is constituted of a nation - wide infrastructure and of international links .most of the research , technological , educational or cultural institutions are connected to renater ( fig .[ renater ] ) .this network enables them to communicate between each other , to get access to public or private world - wide research institutes and to be connected to the global internet .we first restrict our analysis to the national traffic and exclude the information exchange with external hosts and routers such as us and europe internet or peering with other isps .this restriction to a small part of the renater traffic ( of roughly gigabytes a day ) has two methodological advantages : first , it ensures that the traffic studied is strictly professional ( mail to non - academics , like family , friends , consultation of newspapers , e - commerce , etc .goes through outer isp and is not taken into account ) ; second , it helps to understand the regional traffic structure and its relation with local economical factors .we believe that the global patterns emerging for the renater network will be relevant for larger structures such as the global internet . the data consist of the real exchange flow ( sum of ftp , telnet , mail , web browsing , etc . ) between all routers , even if there is not a direct ( physical ) link between all of them . for a connection between routers and ( ) , ( in bytes per minutes )is the effective information flow at time going out from to . for technical reasons , data for a few routers were not reliable and we analyzed data for routers which amounts in matrices given for every minutes for a two weeks period ( the quantities are excluded from the present study ) . as an example of the measured time - series , we show ( fig . [ flux ] ) the information flow versus time between two routers located in grenoble and marseille for a nine days period .one can see the different days and within days , bursts of intense activity . 
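The quantities analysed in the remainder of the paper can be computed from these matrices with a few lines of code. The sketch below (the router count, array shapes and the synthetic flows are assumptions, not the Renater data) aggregates the 5-minute matrices into time-averaged incoming and outgoing flows, and also implements two statistics used later in the text: the disparity Y2 of the incoming flow of a router and the fraction of the total traffic carried by the n largest connections.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 4032, 26                        # two weeks of 5-minute samples, assumed router count
F = rng.lognormal(mean=3.0, sigma=2.0, size=(T, N, N))   # synthetic stand-in for F_ij(t)
F[:, np.arange(N), np.arange(N)] = 0.0                    # the diagonal F_ii is excluded

Fbar = F.mean(axis=0)                  # time-averaged flow matrix (bytes per minute)
incoming = Fbar.sum(axis=0)            # average flow received by each router
outgoing = Fbar.sum(axis=1)            # average flow emitted by each router
rank_in = np.sort(incoming)[::-1]      # rank-ordered activity, to compare with an exponential

def disparity_incoming(Fbar):
    """Y2(j) = sum_i w_ij**2, with w_ij the share of router j's incoming flow sent by i."""
    w = Fbar / Fbar.sum(axis=0, keepdims=True)
    return (w ** 2).sum(axis=0)

def traffic_fraction(Fbar, n):
    """Fraction of the total average traffic carried by the n largest connections."""
    flows = np.sort(Fbar.ravel())[::-1]
    return flows[:n].sum() / flows.sum()

print("top-5 receivers' share:", round(rank_in[:5].sum() / rank_in.sum(), 2))
print("Y2 of incoming flows:  ", np.round(disparity_incoming(Fbar)[:5], 2))
print("phi(n), n = 10, 50, 200:", [round(traffic_fraction(Fbar, n), 2) for n in (10, 50, 200)])
```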
in this study, we focus on the flow and not on the growth rate ( and its correlations ) used in a previous study .we now present our empirical results .the time averaged incoming flow at a given router is a measure of the internet activity of the corresponding region ( the over - bar denotes the average over time ) . on the other hand , the average outgoing flow can be interpreted as the total request emanating from other routers .it is thus a measure of the degree of interest produced by this region .we plotted both quantities , versus their rank ( fig .in contrast with many cases observed since the work of zipf , the observed distributions are not power laws but exponentials . this might be the signature of a transient regime and would mean that the internet did nt reach his stationary state , but it is more probably the sign that the internet traffic has a unique , non hierarchical - type structure .this exponential behavior also means that at least in the renater network there are essentially two categories of regions . considering internet activity ( fig .3a ) , one can distinguish active from ( almost ) inactive regions .roughly , there are about eight cities which receives of the total traffic , the rest being ( exponentially ) negligible . concerning the outgoing flow ( fig .3b ) , there are about five most visited regions , the rest being comparatively ` unattractive ' .we checked that for different time windows the order of these cities can slightly change , but the exponential behavior is independent of these seasonal effects .it is interesting to note that the most active and visited regions are not the same showing that each region has its specific activity .regions with a large incoming flow can be classified as active research centers with a great need of information , and regions with a large outgoing flow correspond to important information resources such as e.g. , databases or libraries . at this stage, we have shown that in the renater traffic there is a small number of receivers ( located in active regions ) and emitters ( visited databases ) . however , a further question concerns the secondary routers and the fine structure of flows .indeed , ( and similarly for ) could be a sum of many small contributions coming from many regions , or in contrast there could be only few regions which exchange a significant flow .simple quantities which can characterize the fine structure of the incoming flow at router are the s introduced in another context where is the weight associated with the incoming flow ( and similar expressions for the structure of outgoing flow ) .it is easy to see that , and the first non trivial quantity is .we can illustrate the physical meaning of with simple examples .if all weights are of the same order for all then is very small .in contrast , if one weight is important for example of the order and the others negligible then is of order unity .thus is a measure of the number of important weights .we plot for both the incoming and outgoing flows ( the statistics is over two weeks ) .the result ( fig . 
[ y2 ] ) shows clearly that the most probable value is and that is larger than ( except for few cases which appear in the histogram ) .this confirms the fact that a few routers are exchanging most of the information , the rest of the network being negligible .in order to illustrate the above results , we construct the network connecting a number of routers and carrying the maximal flow denoted by .we increase from to and we obtain the result plotted in fig .( [ spanning]a ) .it appears clearly that a small fraction of links carry most of the flow .this behavior is encoded in the fact that is a power law for with an exponent smaller than one ( ) , so that a small variation of the number of connections leads to a large variation of the transported flow .this analysis completes the renater traffic map : there is a small number of receivers and emitters exchanging significant information between them , the rest of the network being exponentially negligible .this demonstrates the existence of a ` spanning network ' carrying most of the traffic and connecting the main emitters to the main receivers . in order to visualize this spanning network on the french map , we filter the flow with the following procedure .we first select flows above a certain threshold , and then we select a connection only if the corresponding flow represents a large percentage of ( i ) the outgoing flow from and ( ii ) the incoming flow in .the result is shown on fig .[ spanning]b .we checked that the instantaneous ( or averaged over a different time window ) spanning network is eventually different , but always with the same characteristics ( small number of interconnected emitters and receivers ) .the procedure described above could thus be used as a simple filter in order to visualize in real - time a complex flow matrices .so far , we have studied statistical properties of the traffic , but an important point is to relate them to economical or social factors .the internet activity should in principle be related to social pointers such as the number of inhabitants , the number of students , and so on . from our data , the indicator which shows the best correlation with internet activity is scientific activity measured by the number of published papers .one can expect that the more a scientist consults books or data , the more he / she will publish . this principle ,` the more you read , the more you write ' , although commonly accepted in a number of historical cases , is difficult to evaluate quantitatively .the main difficulty being the measure of the amount of information gathered by scientists in libraries . in the case of internet, the information gathered by scientists working in a given region can be estimated by the average incoming flow in the corresponding router .since the information needed for a scientist is usually scattered world - wide , it is important here to take into account the total incoming flow , including exchanges with international hosts .we thus compare the total average incoming flow ( per scientist ) with the average number of papers published ( per scientist ) per year by the region s universities ( obtained from the sci database ) . 
so far, we have studied statistical properties of the traffic, but an important point is to relate them to economical or social factors. the internet activity should in principle be related to social indicators such as the number of inhabitants, the number of students, and so on. from our data, the indicator which shows the best correlation with internet activity is scientific activity, measured by the number of published papers. one can expect that the more a scientist consults books or data, the more he or she will publish. this principle, `the more you read, the more you write', although commonly accepted in a number of historical cases, is difficult to evaluate quantitatively, the main difficulty being the measurement of the amount of information gathered by scientists in libraries. in the case of the internet, the information gathered by scientists working in a given region can be estimated by the average incoming flow at the corresponding router. since the information needed by a scientist is usually scattered world-wide, it is important here to take into account the total incoming flow, including exchanges with international hosts. we thus compare the total average incoming flow (per scientist) with the average number of papers published (per scientist) per year by the region's universities (obtained from the sci database). as a representative panel, we choose to use data about papers published only by scientists in the national research institution (cnrs). we represent these data on fig. [flux.papers]. this plot shows that the average incoming flow per scientist in a region increases with the number of published scientific papers per scientist by this region's laboratories, roughly as a power law. this result confirms quantitatively the intuitive principle stated above and is particularly interesting from the point of view of the web's social impact. indeed, it implies that the number of publications grows with the incoming flow as a power law: the more one uses the internet, the more one publishes! this result indicates that on average the use of the internet has a positive impact on research productivity. in summary, we have shown that the major part of the traffic takes place only between a few routers while the rest of the network is almost negligible. we have proposed a simple procedure to extract this (bipartite) spanning network, which could have some implications in the visualization and monitoring of real-time traffic. in addition, resource allocation and capacity planning tasks could benefit from the knowledge of such a spanning network. these results point towards new ways of understanding and describing real-world traffic. in particular, any microscopic model should recover these statistical properties, and our results provide a quantitative basis for modeling the dynamics of information flow. we also have shown that the scientific activity of a region increases with its internet activity. this indicates that it is difficult for a scientist to avoid the use of the internet without affecting his or her productivity measured in terms of publications. this result also demonstrates that, in addition to increasing people's social capital, the internet has a measurable positive impact on research production. more generally, it underlines the importance of the internet as a knowledge-sharing vector. this study also suggests that the internet activity could be used as an interesting new socio-economical indicator well adapted to the information society. finally, these results exhibit some global statistical patterns shedding light on the relations between the internet and economical factors. they show that, in addition to the structural complexity of the web and the internet, the traffic has its own complexity, with its own cartography.
the internet infrastructure is not virtual: its distribution is dictated by social, geographical, economical, or political constraints. however, the infrastructure's design does not determine entirely the information traffic, and different sources of complexity, such as the intrinsic heterogeneity of the network or human practices, have to be taken into account. in order to manage the internet expansion, plan new connections or optimize the existing ones, it is thus critical to understand correlations between emergent global statistical patterns of internet activity and human factors. we analyze data from the french national `renater' network, which serves millions of users and consists of interconnected routers located in the different regions of france, and we report the following results. the internet flow is strongly localized: most of the traffic takes place on a `spanning' network connecting a small number of routers which can be classified either as `active centers' looking for information or `databases' providing information. we also show that the internet activity of a region increases with the number of papers published by laboratories of that region, demonstrating the positive impact of the web on scientific activity and illustrating quantitatively the adage `the more you read, the more you write'.
although we have reached a situation in computational linguistics where large coverage grammars are well developed and available in several formal traditions , the use of these research results in actual applications and for application to specific domains is still unsatisfactory .one reason for this is that large - scale grammar specifications incur a seemingly unnecessarily large burden of space and processing time that often does not stand in relation to the simplicity of the particular task .the usual alternatives for natural language generation to date have been the handcrafted development of application or sublanguage specific grammars or the use of template based generation grammars . in approaches are combined resulting in a practical small generation grammar tool .but still the grammars are handwritten or , if extracted from large grammars , must be adapted by hand . in general , both the template and the handwritten application grammar approach compromise the idea of a general nlp system architecture with reusable bodies of general linguistic resources .we argue that this customization bottleneck can be overcome by the automatic extraction of application - tuned consistent generation subgrammars from proved given large - scale grammars . in this paperwe present such an automatic subgrammar extraction tool .the underlying procedure is valid for grammars written in typed unification formalisms ; it is here carried out for systemic grammars within the development environment for text generation kpml .the input is a set of semantic specifications covering the intended application .this can either be provided by generating a predefined test suite or be automatically produced by running the particular application during a training phase .the paper is structured as follows .first , an algorithm for automatic subgrammar extraction for arbitrary systemic grammars will be given , and second the application of the algorithm for generation in the domain of ` encyclopedia entries ' will be illustrated .to conclude , we discuss several issues raised by the work described , including its relevance for typed unification based grammar descriptions and the possibilities for further improvements in generation time .systemic functional grammar ( sfg ) is based on the assumption that the differentiation of syntactic phenomena is always determined by its function in the communicative context .this functional orientation has lead to the creation of detailed linguistic resources that are characterized by an integrated treatment of content - related , textual and pragmatic aspects .computational instances of systemic grammar are successfully employed in some of the largest and most influential text generation projects such as , for example , penman , communal , techdoc , drafter , and gist . for our present purposes , however , it is the formal characteristics of systemic grammar and its implementations that are more important .systemic grammar assumes multifunctional constituent structures representable as feature structures with coreferences . as shown in the following function structure example for the sentence `` the people that buy silver love it . 
'' , different functions can be filled by one and the same constituent, as in the following (simplified) sketch of the function structure:

    senser:            [1] nominal-group [ thing:     noun             [ spelling: ``people'' ]
                                           qualifier: dependent-clause [ spelling: ``that buy silver'' ] ]
    process:               finite                                      [ spelling: ``love'' ]
    phenomenon:        [2] nominal-group
    subject:           [1]
    theme:             [1]
    directcomplement:  [2]

given the notational equivalence of hpsg and systemic grammar, first noted and further elaborated in the literature, one can characterize a systemic grammar as a large type hierarchy with multiple (conjunctive and disjunctive) and multi-dimensional inheritance with an open-world semantics. the basic element of a systemic grammar, a so-called _system_, is a type axiom of the form (adopting the notation of cuf):

....
entry = type_1 | type_2 | ... | type_n .
....

where type_1 to type_n are exhaustive and disjoint subtypes of the entry. the entry need not be a single type; it can be a logical expression over types formed with the connectives and and or. a systemic grammar therefore resembles a type lattice more than a type hierarchy in the hpsg tradition. in systemic grammar, these basic type axioms, the systems, are named; we will speak of the _entry_ for the left-hand side of a named system, and of its _output_ for the set of subtypes {type_1, ..., type_n}. the following type axioms taken from the large systemic english grammar nigel illustrate the nature of systems in a systemic grammar:

....
nominal_group = class_name | individual_name .
nominal_group = wh_nominal | nonwh_nominal .
(or class_name wh_nominal) = singular | plural .
....

the meaning of these type axioms is fairly obvious: on the one hand, nominal groups can be subcategorized into class-names and individual-names; on the other hand, they can be subcategorized with respect to their wh-containment into wh-containing nominal groups and nominal groups without a wh-element. the singular/plural opposition is valid for class-names as well as for wh-containing nominal groups (be they class or individual names), but not for individual-names without a wh-element. systemic types inherit constraints with respect to appropriate features, their filler types, coreferences and order.
here are the constraints for some of the types defined above:

    nominal_group      [ thing: noun ]
    class_name         [ thing: common-noun, deictic: top ]
    individual_name    [ thing: proper-noun ]
    wh_nominal         [ wh: top ]

universal principles and rules are not factored out in systemic grammar. the lexicon contains stem forms and has a detailed word-class type hierarchy at its top. morphology is also organized as a monotonic type hierarchy. currently used implementations of sfg are the penman system, the kpml system and wag-krl. our subgrammar extraction has been applied and tested in the context of the kpml environment. kpml adopts the processing strategy of the penman system, and so it is necessary to briefly describe this strategy. penman performs a semantically driven top-down traversal through the grammatical type hierarchy for every constituent. passed types are collected and their feature constraints are unified to build a resulting feature structure. substructure generation requires an additional grammar traversal controlled by the feature values given in the superstructure. in addition to the grammar in its original sense, the penman system provides a particular interface between grammar and semantics. this interface is organized with the help of so-called _choosers_: these are decision trees, associated with each system of the grammar, which control the selection of an appropriate subtype during traversal. choosers should be seen as a practical means of enabling applications (including text planners) to interact with the grammar using purely semantic specifications, even though a fully specified semantic theory may not yet be available for certain important areas necessary for coherent, fluent text generation. they also serve to enforce deterministic choice, an important property for practical generation. the basic form of a chooser node is as follows:

....
(ask query
     (answer1 actions)
     (answer2 actions)
     ...)
....

the nodes in a chooser are queries to the semantics; the branches contain a set of actions, including embedded queries. possible chooser actions are the following:

....
(ask query (..) ... (..))
(choose type)
(identify function concept)
(copyhub function1 function2)
....

a choose action of a chooser, (choose type), explicitly selects one of the output types of its associated system. in general, there can be several paths through a given chooser that lead to the selection of a single grammatical type: each such path corresponds to a particular configuration of semantic properties sufficient to motivate the grammatical type selected. besides this, choosers serve to create a binding between given semantic objects and grammatical constituents to be generated. this is performed by the action (identify function concept).
because of the multifunctionality assumed for the constituent structure in systemic grammar, two grammatical functions can be realized by one and the same constituent with one and the same underlying semantics. the action (copyhub function1 function2) is responsible for identifying the semantics of both grammatical functions. within such a framework, the first stage of subgrammar extraction is to ascertain a representative set of grammatical types covering the texts for the intended application. this can be obtained by running the text generation system within the application with the full unconstrained grammar. all grammatical types used during this training stage are collected to form the backbone for the subgrammar to be extracted. we call this cumulative type set the _goal-types_. the list of _goal-types_ then gives the point of departure for the second stage, the automatic extraction of a consistent subgrammar. _goal-types_ is used as a filter against which systems (type axioms) are tested. types not in _goal-types_ have to be excised from the subgrammar being extracted. this is carried out for the entries of the systems in a preparatory step. we assume that the entries are given in disjunctive normal form. first, every conjunction containing a type which is not in _goal-types_ is removed. after this deletion of unsatisfiable conjunctions, every type in an entry which is not in _goal-types_ is removed. the restriction of the outputs of every system to the _goal-types_ is done during a simulated depth-first traversal through the entire grammatical type lattice. the procedure works on the type lattice with the revised entries. starting with the most general type _start_ (and the most general system, called _rank_, which is the system with _start_ as entry), a hierarchy traversal looks for systems which, although restricted to the type set _goal-types_, actually branch, i.e. have more than one type in their output. these systems constitute the new subgrammar. in essence, each grammatical system is examined to see how many of its possible output subtypes are used within the target grammar. those types which are not used are excised from the subgrammar being extracted. more specific types that are dependent on any excised types are not considered further during the traversal. grammatical systems where there is only a single remaining unexcised subtype collapse to form a degenerated pseudo-system, indicating that no grammatical variation is possible in the considered application domain. for example, in the application described in section 3 the system ``indicative = declarative | interrogative .'' collapses into ``indicative = declarative .'', because questions do not occur in the application domain. pseudo-systems of this kind are not kept in the subgrammar. the types on their right-hand side (pseudotypes) are excised accordingly, although they are used for deeper traversal, thus defining a path to more specific systems. such a path can consist of more than one pseudotype, if the repeated traversal steps find further degenerated systems. constraints defined for pseudo-types are raised, and chooser actions are percolated down the path; a schematic sketch of the procedure is given below.
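the following python sketch illustrates the core of the restriction step described above: system entries (in disjunctive normal form) and outputs are filtered against the goal types, systems that still branch (or keep a complex entry) are retained, and degenerate single-subtype systems are collapsed into pseudo-types. the data representation is a simplification assumed here for illustration and is not the actual kpml implementation (in particular, constraint raising and chooser percolation are omitted):

....
from dataclasses import dataclass

@dataclass
class System:
    name: str
    entry: list[list[str]]   # disjunctive normal form: a list of conjunctions
    outputs: list[str]       # the subtypes on the right-hand side

def restrict_entry(entry, goal_types):
    """drop conjunctions containing excised types, then drop excised types."""
    kept = [conj for conj in entry if all(t in goal_types for t in conj)]
    return [[t for t in conj if t in goal_types] for conj in kept]

def extract_subgrammar(systems, goal_types):
    """keep systems that still branch (or have complex entries) after restriction."""
    subgrammar, pseudo_types = [], set()
    for s in systems:
        entry = restrict_entry(s.entry, goal_types)
        outputs = [t for t in s.outputs if t in goal_types]
        if not entry or not outputs:
            continue                        # system unreachable in the sublanguage
        complex_entry = len(entry) > 1 or len(entry[0]) > 1
        if len(outputs) > 1 or complex_entry:
            subgrammar.append(System(s.name, entry, outputs))
        else:
            pseudo_types.add(outputs[0])    # degenerate: single remaining subtype
    return subgrammar, pseudo_types

# toy example: the 'indicative' system collapses because 'interrogative'
# never occurred in the training texts.
systems = [System("indicative", [["clause"]], ["declarative", "interrogative"])]
print(extract_subgrammar(systems, {"clause", "declarative"}))  # ([], {'declarative'})
....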
more precisely, constraints belonging to a pseudo-type are unified with the constraints of the most general non-pseudo type at the beginning of the path. chooser actions from systems on the path are collected and extend the chooser associated with the final (and first non-pseudo) system of the path. however, in the case that a maximal type is reached which is not in _goal-types_, chooser actions have to be raised too. the number of _goal-types_ is then usually larger than the number of types in the extracted subgrammar, because all pseudotypes in _goal-types_ are excised. as the recursion criterion in the traversal, we first simply look for a system which has the actual type in its revised entry, regardless of whether it occurs in a conjunction or not. this on its own, however, oversimplifies the real logical relations between the types and would create an inconsistent subgrammar. the problem is conjunctive inheritance. if the current type occurs in an entry of another system where it is conjunctively bound, a deeper traversal is in fact only licensed if the other types of the conjunction are chosen as well. in order to perform such a traversal, a breadth-first traversal with compilation of all crowns of the lattice would be necessary. in order to avoid this potentially computationally very expensive operation, but not to give up the consistency of the subgrammar, the implemented subgrammar extraction procedure sketched in figure [alg] maintains all systems with complex entries (be they conjunctive or disjunctive) for the subgrammar, even if they do not really branch but collapse to a single-subtype system. a related approach, for the extraction of smaller systemic subgrammars for analysis, can be found in the literature. if the lexicon is organized as, or under, a complex type hierarchy, the extraction of an application-tuned lexicon is carried out similarly. this has the effect that closed-class words are removed from the lexicon if they are not covered in the application domain. open-class words belonging to word classes not covered by the subgrammar type set are removed. some applications do not need their own lexicon for open-class words because they can be linked to an externally provided domain-specific thesaurus (as is the case for the examples discussed below). in this case, a sublexicon extraction is not necessary. the first trial application of the automatic subgrammar extraction tool has been carried out for an information system with an output component that generates integrated text and graphics. this information system has been developed for the domain of art history and is capable of providing short biography articles for around 10 000 artists. the underlying knowledge base, comprising half a million semantic concepts, includes automatically extracted information from 14 000 encyclopedia articles from macmillan's planned publication ``dictionary of art'', combined with several additional information sources such as the getty ``art and architecture thesaurus''; the application is described in detail elsewhere.
as input the user clicks on an artist name. the system then performs content selection, text planning, text and diagram generation and page layout automatically. possible output languages are english and german. the grammar necessary for short biographical articles is, however, naturally much more constrained than that supported by general broad-coverage grammars. there are two main reasons for this: first, because of the relatively fixed text type ``encyclopedia biography'' involved, and second, particularly in the example information system, because of the relatively simple nature of the knowledge base, which does not support more sophisticated text generation as might appear in full encyclopedia articles. without extensive empirical analysis, one can already state that such a grammar is restricted to main clauses, only coordinative complex clauses, and temporal and spatial prepositional phrases. it would probably be possible to produce the generated texts with relatively complex templates and aggregation heuristics: but the full grammars for english and german available in kpml already covered the required linguistic phenomena. the application of the automatic subgrammar extraction tool to this scenario is as follows. example texts:

    roger hilton was an english painter. he was born at northwood on 23 march 1911, and he died at botallack on 23 february 1975. he studied at slade school in 1929-1931. he created ``february-march 1954'', ``grey figure'', ``oi yoi yoi'' and ``june 1953 (deep cadmium)''.
    anni albers is american, and she is a textile designer, a draughtsman and a printmaker. she was born in berlin on 12 june 1899. she studied art in 1916-1919 with brandenburg. also, she studied art at the kunstgewerbeschule in hamburg in 1919-1920 and the bauhaus at weimar and dessau in 1922-1925 and 1925-1929. in 1933 she settled in the usa. in 1933-1949 she taught at black mountain college in north carolina.

in the training phase, the information system runs with the full generation grammar. all grammatical types used during this stage are collected to yield the cumulative type set _goal-types_. how many text examples must be generated in this phase depends on the relative increase of new information (occurrence of new types) obtained with every additional sentence generated. we show here the results for two related text types: `short artist biographies' and `artist biography notes'. figure [longer-stats] shows the growth curve for the type set (vertical axis) with each additional semantic specification passed from the text planner to the sentence generator (horizontal axis) for the first of these text types. the graph shows the cumulative type usage for the first 90 biographies generated, involving some 230 sentences. the subgrammar extraction for the ``short artist biographies'' text type can therefore be performed with respect to the 246 types that are required by the generated texts, applying the algorithm described above. the resulting extracted subgrammar is a type lattice with only 144 types. the size of the extracted subgrammar is only 11% of that of the original grammar. run times for sentence generation with this extracted grammar typically range from 55%-75% of that of the full grammar (see table [runtime]); in most cases, therefore, less than one second with the regular kpml generation environment (i.e., unoptimized and with full debugging facilities resident). (table [runtime] reports generation run times, measured under allegro common lisp running on a sparc10.) the generation times are indicative of the style of generation implemented by kpml. clause types with more subtypes are likely to cause longer processing times than those with fewer subtypes. when there are in any case fewer subtypes available in the full grammar (as in the existential shown in table [runtime]), then there will be a less noticeable improvement compared with the extracted grammar. in addition, the run times reflect the fact that the number of queries being asked by choosers has not yet been maximally reduced in the current evaluation.
noting the cumulative set of inquiry responses during the training phase would provide sufficient information for more effective pruning of the extracted choosers. example text:

    nathan drake was an english painter. he was born at lincoln in 1728, and he died at york on 19 february 1778.

the second example shows similar improvements. the very short biography entry is more appropriate for figure headings, margin notes, etc. the cumulative type use graph is shown in figure [eg2]. with this `smaller' text type, the cumulative use stabilizes very quickly (i.e., after 39 sentences) at 205 types. this remained stable for a test set of 500 sentences. extracting the corresponding subgrammar yields a grammar involving only 101 types, which is 7% of the original grammar. sentence generation time is accordingly faster, ranging from 40%-60% of that of the full grammar. in both cases, it is clear that the size of the resulting subgrammar is dramatically reduced. the generation run-time is cut to 2/3. the run-time space requirements are cut similarly. the processing time for subgrammar extraction is less than one minute, and is therefore not a significant issue for improvement. in this paper, we have described how generation resources for restricted applications can be developed drawing on large-scale general generation grammars. this enables both re-use of those resources and progressive growth as new applications are met. the grammar extraction tool then makes it a simple task to extract from the large-scale resources specially tuned subgrammars for particular applications. our approach shows some similarities to one proposed previously for improving parsing performance by grammar pruning and specialization with respect to a training corpus, in which rule components are `chunked' and pruned when they are unlikely to contribute to a successful parse. here we have shown how improvements in generation performance can be achieved for generation grammars by removing parts of the grammar specification that are not used in some particular sublanguage. the extracted grammar is generally known to cover the target sublanguage, and so there is no loss of required coverage.
another motivation for this work is the need for smaller , but not toy - sized , systemic grammars for their experimental compilation into state - of - the - art feature logics .the ready access to consistent subgrammars of arbitrary size given with the automatic subgrammar extraction reported here allows us to investigate further the size to which feature logic representations of systemic grammar can grow while remaining practically usable .the compilation of the full grammar nigel has so far only proved possible for cuf ( see ) , and the resulting type deduction runs too slowly for practical applications .it is likely that further improvements in generation performance will be achieved when both the grammatical structures and the extracted choosers are pruned .the current results have focused primarily on the improvements brought by reconfiguring the type lattice that defines the grammar .the structures generated are still the ` full ' grammatical structures that are produced by the corresponding full grammar : if , however , certain constituent descriptions are always unified ( conflated in systemic terminology ) then , analogously to , they are candidates for replacement by a single constituent description in the extracted subgrammar .moreover , the extracted choosers can also be pruned directly with respect to the sublanguage .currently the pruning carried out is only that entailed by the type lattice , it is also possible however to maintain a record of the classificatory inquiry responses that are used in a subgrammar : responses that do not occur can then motivate further reductions in the choosers that are kept in the extracted grammar .evaluation of the improvements in performance that these strategies bring are in progress .one possible benefit of not pruning the chooser decision trees completely is to provide a fall - back position for when the input to the generation component in fact strays outside of that expected by the targetted subgrammar .paths in the chooser decision tree that do not correspond to types in the subgrammar can be maintained and marked explicitly as ` out of bounds ' for that subgrammar .this provides a semantic check that the semantic inputs to the generator remain within the limits inherent in the extracted subgrammar .if it sufficiently clear that these limits will be adhered to , then further extraction will be free of problems .however if the demands of an application change over time , then it is also possible to use the semantic checks to trigger regeneration with the full grammar : this offers improved average throughput while maintaining complete generation .noting exceptions can also be used to trigger new subgrammar extractions to adapt to the new applications demands .a number of strategies therefore present themselves for incorporating grammar extraction into the application development cycle .although we have focused here on run - time improvements , it is clear that the grammar extraction tool has other possible uses .for example , the existence of small grammars is one important contribution to providing teaching materials .also , the ability to extract consistent subcomponents should make it more straightforward to combine grammar fragments as required for particular needs .further validation in both areas forms part of our ongoing research .moreover , a significantly larger reduction of the type lattice can be expected by starting not from the cumulative set of goal - types for the grammar reduction , but from a detailed protocol of 
jointly used types for every generated sentence of the training corpus .a clustering technique applied to such a protocol is under development .finally , the proposed procedure is not bound to systemic grammar and can also be used to extract common typed unification subgrammars . here, however , the gain will probably not be as remarkable as in systemic grammar .the universal principles of , for example , an hpsg can not be excised .hpsg type hierarchies usually contain mainly general types , so that they will not be affected substantially . in the end , the degree of improvement achieved depends on the extent to which a grammar explicitly includes in its type hierarchy distinctions that are fine enough to vary depending on text type .robin p. fawcett and gordon h. tucker .demonstration of genesys : a very large , semantically based systemic functional grammar . in _ 13th .international conference on computational linguistics ( coling-90 ) _ , volume i , pages 47 49 , helsinki , finland .thomas kamps , christoph hser , wiebke mhr , and ingrid schmidt .knowledge - based information acess for hypermedia reference works : exploring the spread of the bauhaus movement . in maristella agosti and alan f. smeaton , editors ,_ information retrieval and hypertext _ , pages 225255 .kluwer academic publishers , boston / london / dordrecht .christian m.i.m . matthiessen .1983 . systemic grammar in computation : the nigel case . in _ proceedings of the first annual conference of the european chapter of the association for computational linguistics_. elena not and oliviero stockautomatic generation of instructions for citizens in a multilingual community . in _ proceedings of the european language engineering convention _ ,paris , france , july .michael odonnell .technical report , fujitsu limited , tokyo , japan . ( internal report of project carried out at fujitsu australia ltd . ,sydney , project leader : guenter plum , document engineering centre ) .ehud reiter .1994 . has a consensus nl generation architecture appeared , and is it psychologically plausible ? in _ proceedings of the 7th . international workshop on natural language generation ( inlgw 94 )_ , pages 163170 , kennebunkport , maine .dietmar rsner and manfred stede .generating multilingual documents from a knowledge base : the techdoc project . in _ proceedings of the 15th .international conference on computational linguistics ( coling 94 ) _ , volume i , pages 339 346 , kyoto , japan .
the space and run-time requirements of broad-coverage grammars appear, for many applications, unreasonably large in relation to the relative simplicity of the task at hand. on the other hand, handcrafted development of application-dependent grammars is in danger of duplicating work which is then difficult to re-use in other contexts of application. to overcome this problem, we present in this paper a procedure for the automatic extraction of application-tuned consistent subgrammars from proven large-scale generation grammars. the procedure has been implemented for large-scale systemic grammars and builds on the formal equivalence between systemic grammars and typed unification-based grammars. its evaluation for the generation of encyclopedia entries is described, and directions of future development, applicability, and extensions are discussed.
since its origin , it has been known that quantum mechanics ( qm ) has significant structural differences compared to classical mechanics ( cm ) . for instance , quantum observables , contrary to classical ones , do not necessarily commute and , therefore , are not necessarily ( experimentally ) compatible . in other terms , the algebra of quantum observables , contrary to the algebra of classical ones , is non - commutative .also , in cm propositions ( i.e. , the statements about the properties of a physical system ) are either true or false , and can coherently be combined by means of the disjunction ( or ) and conjunction ( and ) logical operations , giving rise to a propositional boolean algebra for which the distributivity property between the or and and operations holds .on the other hand , not only quantum propositions are not in general _ a priori _ either true or false , but also have the tendency to violate the distributivity law ; hence , they do not form a boolean algebra .another important structural difference between qm and cm is the fact that the probability model describing a classical system is kolmogorovian ( i.e. , it obeys kolmogorov s axioms of classical probability theory ) whereas the one describing a quantum one is not .these deep structural differences between qm and cm have certainly contributed to the consolidation of the preconception that qm , contrary to cm , can not be understood , as exemplified in a feynman s celebrated quote : `` there was a time when the newspapers said that only twelve men understood the theory of relativity .i do not believe that there ever was such a time .there might have been a time when only one man did , because he was the only guy who caught on , before he wrote his paper .but after people read the paper a lot of people understood the theory of relativity in some way or other , certainly more than twelve . on the other hand ,i think i can safely say that nobody understands quantum mechanics . ''one of the purposes of this paper is to show that this preconception is unfounded , as we dispose today of a very clear explanation of the origin of quantum structures . such an explanation is contained in aerts _ creation - discovery view _ and , more specifically , in his _ hidden - measurement approach _ .aert s explanatory framework has been substantiated , over the years , by a number of amazing machine - models .these are conventional macroscopic mechanical objects , like those we encounter in our everyday life , that , surprisingly , are able to reproduce not only the strange behavior of pure quantum systems , but also the behavior of more general intermediate structures , which are neither quantum nor classical , but truly intermediate . andsince the functioning of these machine - models is fully under our eyes , one can today confidently say , in contrast to feynman s admonition , that much of the quantum mystery has been in fact unveiled . among the most important machine examples invented by aerts, we can cite his `` connected vessels of water '' model , which can reproduce epr non - local correlations and violate bell s inequalities , and his -model , describing a point particle on which experiments ( measurements ) are performed in a very particular way , by exploiting the breakability of peculiar elastic bands ( more will be said about it later in the article ) . in this model , is a continuous parameter that can be varied from to . 
in one limit, the system becomes purely classical, with the outcomes of the measurements _a priori_ determined by the state of the entity. on the other hand, in the opposite limit, the system becomes purely quantum, and is structurally equivalent to the spin of a spin-1/2 quantum ``particle,'' reproducing the same transition probabilities that are obtained in a typical stern-gerlach experiment. and, for the intermediate values, the system exhibits interesting intermediate structures, which cannot be modeled by a classical phase space or a quantum hilbert space. (the model has also been successfully applied, as a mathematical tool, in conjunction with the notion of _contextual risk_, to model the ambiguity appearing in the so-called ellsberg paradox, in decision theory and experimental economics.) in this paper we want to follow aerts' great tradition of inventing macroscopic models that are able to reproduce the behavior of quantum entities, and beyond. more precisely, in the spirit of the above-mentioned model, we will introduce a new machine-model, which also depends on a parameter: an integer that can be varied from a minimal to a given maximal value. for the maximal value the system can be shown to reproduce the transmission and reflection probabilities of a classical scattering process, whereas for the minimal value it reproduces the transmission and reflection probabilities of a quantum scattering entity interacting with a dirac delta-function potential. and, for the intermediate values, the machine delivers transmission and reflection probabilities which, in general, cannot be classified as classical or quantum, being truly intermediate. to this end, in sec. [creation-discovery], we start by introducing, in a didactical way, the general explanatory framework of aerts' _creation-discovery view_ and _hidden-measurement approach_, providing the conceptual language that will allow us to understand the origin of the structural difference between classical, quantum and (quantum-like) intermediate systems. in sec. [scattering], we use this language to explain the physical content of transmission and reflection probabilities in one-dimensional quantum scattering systems, and in sec. [delta-scattering] we explicitly calculate them for the simple case of a delta-function potential. then, in sec. [dirac quantum machine], we present in detail the design and functioning of the delta-quantum machine and show that it reproduces the transmission and reflection probabilities associated with a delta-function potential. in sec. [model], we generalize the functioning of the delta-quantum machine into a more general model, and show that in the two limit situations it reproduces the quantum and classical probabilities, respectively, whereas for intermediate values of the parameter it can describe processes which are neither classical nor quantum, but truly intermediate.
in sec .[ comparison ] , we highlight the main differences between aerts -model which we also describe in some details and our -model , and in sec .[ potentiality ] we use the powerful metaphor of the latter to deepen our understanding of the behavior of quantum ( and quantum - like ) entities , particularly for what concerns their property of being able to switch from actual to potential modes of being .this will take us , in sec .[ weak ] , to the introduction of the new concept of _ process - actuality _ of a property , that we use to define the related concepts of process - existence and process - macroscopic wholeness .thanks to these definitions , we will be in a position , in sec .[ non - spatiality ] , to give a precise definition of the important notion of _ non - spatiality _ , which we show is to be understood not as absence of spatiality , but as existence in an intermediary physical space .finally , in sec . [ conclusion ] , we conclude our work by providing some final remarks .the conceptual language we present in this section is mainly the result of the work of two physicists : constantin piron ( particularly for what concerns the concept of _ experimental project _ , the definition of the _ state _ of an _ entity _ and the precise characterization of the so - called _ classical prejudice _ ) and diederik aerts ( particularly for his deep analysis of the structural differences between classical and quantum systems , in relation to the various changes an entity can undergo in a measurement process , and the corresponding distinction between classical and quantum probabilities , as expressed in his _ creation - discovery view _ and , more specifically , in his _ hidden measurement approach _ ) .the subtlety and richness of the concepts presented in this section would require many more pages of explanation and analysis , also of a mathematical nature .however , the rather succinct and intuitive presentation of this section will certainly suffice for the goal of this article , and we refer the interested reader to the papers of piron and aerts that we have mentioned in the introduction . *entity*. a physicist investigation starts when he ( she ) focus his ( her ) attention on some specific phenomena , happening in his ( her ) reality , neglecting some others . to these ensembles of phenomena , which emerge from the others, he ( she ) can give specific names , and attach properties .in other words , a scientist investigating reality will use his ( her ) analytical skills to conceptually separate parts of reality having specific sets of properties .these parts are called _ entities_. an entity is not necessarily a spatial phenomenon , as it can also refer to mathematical , mental , conceptual aspects of our reality , and many other as well . in other terms , an entity is just an element ( not necessarily elementary ) of our total reality to which , in our role of participative observers , we are able to attribute specific properties . *property*. generally speaking , a property is something an entity has independently of the type of context it is confronted with .properties can either be _ actual _ or _potential_. if they are actual , it means that the outcomes of those _ tests _ which are used to ( operationally ) define them can be predicted , at least in principle , with certainty . on the other hand ,if they are potential , it means that such outcomes can not be predicted with certainty , not even in principle . * experimental project*. 
the tests that are used to operationally define an entity s properties ( and deduce their actuality or potentiality ) are _ experimental projects _ whose outcomes lead to a well - defined `` yes - no '' alternative .they require the specification of : the measuring apparatus to be used , the operations to be performed , and the rule to be applied to unambiguously interpret the results of the experiment in terms of the ( mutually excluding ) `` yes '' ( successful ) and `` no '' ( unsuccessful ) alternatives .* state*. by definition , the _ state _ of an entity is the set of all its actual properties , i.e. , the collection of all properties that are actual for an entity in a given moment . andsince with time some actual properties become potential , whereas some other potential properties become actual , this means that the state of an entity , in general , changes ( i.e. , it evolves ) . in other words, what one can state about an entity in a given moment is different from what one can state about the same entity in the following moment .however , not all properties of an entity will change with time : some of them , usually called _ intrinsic properties _ , or _ attributes _ , are more stable , and are usually used to characterize the entity s _ identity _ , and when they cease to be actual one says that the entity has been destroyed ( or partially destroyed ) .* classical prejudice*. being the state the collection of all properties that are actual in a given moment , it s clear that once we know the state of an entity we know , by definition , all it can be said with certainty about it , in that moment .this may lead one to believe that , accordingly , the outcome of whatever test we can perform on the entity is in principle predictable with certainty .such an auua ( _ additional unconsciously used assumption _ , as aerts likes to call them ; see ) is usually referred to as the _ classical prejudice _ : a preconceived idea that was long believed by physicists , but in the end has been falsified by the quantum revolution . * lack of knowledge*. in the description of an entity , we have to distinguish two kinds of _ lack of knowledge_. the first kind is related to our possible incomplete knowledge of the state of the entity , whereas the second kind , much more subtle , is related to our ignorance about the specific interactions arising between the entity and its context , and in particular the experimental testing apparatus .* classical and quantum probabilities*. every time a scientist is in a situation of lack of knowledge , the best he can do is to formulate probabilistic predictions about the outcome of his experimental projects .different typologies of lack of knowledge will produce different probabilities ._ classical probabilities _ ( obeying kolmogorov s axioms ) correspond to situations where the lack of knowledge is only about the state of the entity .quantum probabilities ( not obeying kolmogorov s axioms ) correspond to situations where there is a full knowledge of the state of the entity , but a maximum lack of knowledge about the interaction between the measurement apparatus and the entity . in between these two extremes , one finds intermediate pictures , giving rise to intermediate probabilities which can be neither fitted into a quantum probability model , nor into a classical probability model . * hidden measurements*. 
the origin of the structural differences between quantum and classical entities can be more easily understood by introducing the important concept of _hidden measurement_ (by measurement we mean here an experiment testing a specific property, or set of properties). in general, measurements are not just observations without effects, as they can provoke real changes in the state of the entity. however, as we usually lack knowledge about the reality of what exactly happens during a measurement process, its outcomes can only be predicted in probabilistic terms. this can be modeled by assuming that to a given (indeterministic) measurement is associated a collection of ``hidden'' _deterministic_ measurements, and that when the measurement is performed (on an entity in a given state), one of these hidden measurements does actually take place. in other terms, quantum (or quantum-like) probabilities find their origin in our lack of knowledge about which one of these hidden (deterministic) measurements effectively takes place. *creation and discovery*. according to the above, classical probabilities express our lack of knowledge about the state of an entity, i.e., about the properties that are already present (i.e., actual) before doing, or even deciding to do, an experiment. in other words, classical probabilities are about our possibility to _discover_ something that is already there. quantum (or quantum-like) probabilities, on the other hand, express our lack of knowledge about properties that do not exist before the experiment (i.e., are only potential), but are literally _created_ (i.e., actualized) by means of the experiment. in other terms, the distinction between classical and quantum probabilities would be just a distinction between discovering what is already there and creating what is still not there, by means of an experiment (i.e., a measurement process). *soft and hard acts of creation*. we conclude this telegraphic presentation by also mentioning the distinction between _soft_ and _hard_ creations, as considered by coecke. a _soft creation_ is a (unitary) structure-preserving process that does not alter the set of states of an entity, but only the relative actuality and potentiality of its set of properties. on the other hand, a _hard creation_ is a process that has the power to alter the set of states of an entity. however, considering that an entity can also be defined by its attribute of having a given set of states, we can say that a hard act of creation is a process that destroys (or partially destroys) the original entity's identity, which therefore disappears from our sight, and creates new entities, suddenly appearing to our sight (or the ``sight'' of our instruments). _note_: a soft act of creation can also be understood as a composite process constituted by a succession of hard acts of creation, whose overall effect results in the restoration of the entity's original identity. using the conceptual language we have introduced in the previous section, we shall now describe a typical one-dimensional quantum scattering process and the associated transmission and reflection probabilities (one assumes here that the potential varies only along a single direction, over a finite extension, thus allowing a separation of the three-dimensional schrödinger equation into a two-dimensional free-electron motion and a one-dimensional effective problem).
a quantum entity, like an electron, is an entity characterized by some intrinsic properties, like its spin, its charge and its mass; these are attributes that will remain constantly actual for as long as the entity exists (i.e., is not destroyed). in addition to them, the state of the quantum entity is characterized by a number of non-intrinsic properties, whose actuality and potentiality may vary as time passes by, like for instance the property of ``being present in a given region of space,'' ``having the momentum in a given cone,'' ``having the spin oriented in a given spatial direction,'' and so on. when the entity interacts with a force field, described by a potential function, it can typically be in a bound state, and therefore remain localized in the interaction region for all times, or be in a scattering state, propagating away from any bounded region as time increases (more precisely, instead of saying that the entity ``propagates away from any bounded region,'' we should say that it ``has a high probability of being detected far away from any bounded region'' in the distant future). let us assume that the quantum entity under consideration is, at the initial time, in a scattering state, and that the problem is effectively one-dimensional. in a typical scattering experiment, one makes sure to prepare the entity, in the remote past, in a suitable freely evolving state, generated by the free hamiltonian. this can be expressed by an asymptotic condition stating that the difference between the actual state and the freely evolving reference state tends to zero in the remote past. let us assume that the entity approaches the interaction region from the left. this means it has been prepared in the past in a state of positive momentum. more precisely, if one considers the projection operator onto the set of states of positive momentum, i.e., the set of states actualizing the property of ``having a positive momentum,'' then approaching the potential from the left means that the incoming state lies entirely in the range of this projection. a one-dimensional scattering process can be used as an experimental project to test two specific properties of the quantum entity: _transmissibility_ and _reflectivity_ (more exactly, one should speak of transmissibility and reflectivity relative to the given potential and initial state, as transmission and reflection are defined only in relation to a specific potential and initial state). for the transmissibility (resp. reflectivity) test, the operations to be performed are the following: observe, by means of a suitable measuring apparatus (the details of which we don't need to describe here), whether the entity, which has been duly prepared in the past, will be detected, in the distant future, in the right side (resp. left side) of the potential region, far away from it. if the apparatus reveals the presence of the entity in the mentioned spatial region in the distant future, the test is considered successful (``yes'' answer) and the property of transmissibility (resp. reflectivity) will be said to have been confirmed. let us limit our considerations to the transmission case (the reflection one, _mutatis mutandis_, being similar). the property ``being present in the far right of the potential'' can be associated in qm with a projection operator onto the set of states localized in a distant spatial interval on the right of the potential region. accordingly, transmissibility can be defined as the property for the scattering entity of ``being present in the far right of the potential in the distant future,'' i.e., the property of being found there, with certainty, in the limit of large times.
if the incoming entity is a classical particle , by knowing its state we can easily predict in advance , with certainty , the outcome of the above transmissibility test , in accordance with the classical prejudice mentioned in the previous section .indeed , a classical particle will be transmitted if and only if its incoming energy is strictly above the potential , i.e. , iff . on the other hand , inqm a full knowledge of the entity s state ( at whatever moment ) is not sufficient to pre - determine the outcome of the transmissibility test . in other terms, transmissibility remains a potential ( uncreated ) property , for as long as the test is not performed , and can only possibly be actualized ( created ) if one effectively performs it .as any student of qm learns , the best one can do is to predict the outcome of the transmissibility test in probabilistic terms , using for this the _ born rule _( which was formulated by born in a 1926 paper , precisely in the context of a scattering problem ) .more precisely , following the above discussion , the quantum probability for a successful outcome of the transmissibility test ( simply called `` transmission probability '' ) is given by : the second equality in ( [ transmission probability definition ] ) follows from the future asymptotic condition , as ( being the unitary scattering operator ) , whereas the first equality in ( [ transmission probability definition2 ] ) expresses the intuitively evident fact that the probability of finding the scattering entity in the region , as , is the same as the probability for the entity to propagate in the direction in which it will eventually penetrate into that region , which in the present case corresponds to the probability of having positive momentum ( mathematically , this fact follows from the well known dollard s scattering - into - cones formula ; see for instance ) .finally , the last equality in ( [ transmission probability definition2 ] ) simply follows from the fact that =0 $ ] . 
At this point, considering that the scattering operator S commutes with the free Hamiltonian H₀, and that P₊φ = φ, by defining the transmission operator T = P₊SP₊, we can write the transmission probability ([transmission probability definition2]) in the form:

𝒯 = ||Tφ||² = ∫₀^∞ dE |t(E)|² |φ(E)|²,

where t(E) is the on-shell element of the transmission operator at energy E and |φ(E)|² is the energy probability density of the incoming state. Therefore, assuming that the incoming entity from the left has also been prepared in order to actualize the property of "having a well defined energy," meaning that the incoming wave packet is sharply peaked about, say, energy E₀, we obtain:

𝒯 ≈ |t(E₀)|².

Of course, a similar approximation holds for the reflection case, yielding for the quantum reflection probability:

ℛ ≈ |r(E₀)|²,

where r(E₀) is the on-shell element of the reflection (from the left) operator, at energy E₀.

Having clarified the meaning of transmissibility and reflectivity in a quantum scattering process, and derived the corresponding transmission and reflection probabilities, we want now to explicitly calculate them in the simple case of a delta-function potential: V(x) = λδ(x), λ > 0. This can easily be done by directly solving the stationary Schrödinger equation:

[-(ℏ²/2m) d²/dx² + λδ(x)] ψ(E, x) = E ψ(E, x),   ([stationary schroedinger equation])

with boundary conditions (describing an entity coming from the left)

ψ(E, x) = e^{ikx} + r(E) e^{-ikx}, for x ≤ 0,   ψ(E, x) = t(E) e^{ikx}, for x ≥ 0,

with k = √(2mE)/ℏ, and where r(E) and t(E) are the reflection and transmission amplitudes, at energy E. Continuity of ψ(E, x) at x = 0 yields:

1 + r(E) = t(E).   ([equality1])

Integrating ([stationary schroedinger equation]) from -ε to ε, using the properties of the delta-function, then taking the limit ε → 0, one obtains the second equality:

ik [r(E) - 1] = (2mλ/ℏ² - ik) t(E).   ([equality2])

By combining ([equality1]) and ([equality2]), one then obtains:

t(E) = 1/(1 + iα),   r(E) = -iα/(1 + iα),

where we have defined α ≡ α(E) = mλ/(ℏ²k). Finally, taking the square modulus of the above amplitudes, one gets the quantum transmission and reflection probabilities at fixed energy:

|t(E)|² = E/(E + E_λ),   |r(E)|² = E_λ/(E + E_λ),   E_λ = mλ²/(2ℏ²).   ([transmission and reflection probabilities])

In the following, to simplify the discussion, we set the coupling λ so that E_λ = mλ²/(2ℏ²) = 1 (in our chosen units of energy), so that α²(E) = 1/E and, simply:

|t(E)|² = E/(E + 1),   |r(E)|² = 1/(E + 1).   ([transmission and reflection probabilities2])

In this section, we describe the (Dirac) δ-quantum machine and its functioning, and show that it reproduces the quantum transmission and reflection probabilities ([transmission and reflection probabilities2]). The entity under study is a macroscopic compound object made of N tiny spheres, all having the same mass and density, which can either be positively or negatively electrically charged. The spheres are assumed to be able (in normal conditions) to remain in contact together, for instance because they are slightly magnetic, thus forming a whole cluster-entity. The entity possesses many distinctive attributes that characterize its identity, like the number N of its components, its total mass M = Nm, with m the mass of a single sphere, the material it is made of (which we do not need to specify here), and many others as well. And of course, for the entity to continue to exist in our physical space, all these defining attributes have to remain actual. But, besides its more stable attributes, the entity can also assume different states. For instance, it can occupy different spatial locations, have different orientations, shapes, and so on. Some of these states are the result of specific preparations, i.e., determinative processes through which specific states for the entity are selected. Others can be the result of _measurement_ processes which, contrary to preparations, are in general non-determinative, but only interrogative, so that their outcomes cannot in general be predicted with certainty.
In the following we are interested in those preparations that correspond to the different electric charges that the entity can support. As we said, we assume that each one of the constituent spheres can either assume a positive electric charge +e, or a negative charge -e, but cannot be electrically neutral. Therefore, the entity can be prepared in N + 1 different electric states, each characterized by a specific electric charge:

q = (n₊ - n₋) e,

where n₊ and n₋ are the number of spheres having positive and negative electric charge, respectively, and n₊ + n₋ = N. For instance, an entity made of N = 5 spheres can be prepared in 6 different electric states, characterized by the charges: -5e, -3e, -e, +e, +3e, +5e. For later convenience, we also introduce the variable E = n₊/n₋ and observe that the charge can be entirely expressed in terms of E by the formula q = Ne(E - 1)/(E + 1). Hence, we can equivalently parameterize the electric states of the entity by using E instead of q.

Once we have prepared the entity in a given electric state, we may want to perform some experiments, like for instance a scattering experiment. For this, we use a specific experimental apparatus, consisting of a box with a left upper entry compartment, and two left and right lower exit compartments (see figure [delta quantum machine]). The box contains some mysterious mechanism that causes the entity which is introduced inside the left upper compartment to exit either in the right lower compartment or in the left one. And we shall say that the entity has the property of _transmissibility_ (resp. _reflectivity_) if, once introduced in the box, it ends its run in the right (resp. left) lower exit compartment, with certainty. More precisely, the experimental protocol is as follows: prepare the entity in a given state and place it in the entry compartment, wait until the machine stops producing noise, then look in the two lower exit compartments. If you find the entity in the right one, the outcome of the experiment is a "yes" if the experiment is used to test transmissibility, and "no" if it is used to test reflectivity. Conversely, if you find it in the left one, the outcome is "no" if the experiment is used to test transmissibility, and "yes" if it is used to test reflectivity.

Now, the mechanism inside the box, which we are going to describe below, is such that if one performs, for each incoming state, a large number of experiments, and then calculates the relative frequencies of transmitted and reflected events, one finds that these relative frequencies converge (as the number of experiments increases) to the quantum transmission and reflection probabilities ([transmission and reflection probabilities2]). In other terms, the entity and the measuring apparatus constitute a Dirac _δ-quantum machine_, in the sense that the system is isomorphic (from a probabilistic point of view) to the one-dimensional scattering of a quantum entity by a delta-function potential.

Let us now reveal the mystery and describe the interior of the machine (see figure [delta quantum machine]). Once the entity has been introduced in the entry compartment, it rolls down along a tube and falls inside a central internal compartment. Due to the impact against the walls of the compartment, the composite entity (which is quite fragile, as the magnetic cohesion of the spheres is low) breaks into its components. Then, a specific shutter mechanism selects a single sphere and lets it fall exactly in the middle of the two charged plates of a parallel-plate capacitor (condenser).
Assuming that the left and right plates are positively and negatively charged, respectively, as it falls the sphere is deviated to the right if its charge is positive, or to the left if its charge is negative. And, at the end of its fall, it lands on a horizontal lever which is in equilibrium on its pivot (like a seesaw). If the falling particle is positively charged, its landing point will be on the right side of the pivot and therefore its weight will cause the lever (which makes a single whole with the two exit compartments) to go down to the right. In this way, the sphere will reach the right exit compartment and remain there, causing the lever to maintain its inclination to the right (or to the left, for a negatively charged sphere). Then, after a little while, the automatic shutter mechanism frees another sphere, which can either be positively or negatively charged. If it is positive, it will fall to the right and reach the first positive sphere inside the right exit compartment. On the other hand, if it is negative, the capacitor's electric field will cause it to deviate to the left and land on the left side of the lever's pivot. However, since the first sphere has already reached the far right of the lever, its torque (moment of force) will not be sufficient to turn the lever to the left. Thus, following a brief exploration of part of the left-hand side of the lever, it will revert its motion and also end its run inside the right compartment. The process continues in this way, with the shutter mechanism releasing one sphere after the other (only one sphere at a time passes through the capacitor), with all of them ending their journey either inside the right compartment (if the first sphere was positive) or inside the left one (if the first sphere was negative), rebuilding in this way the whole composite entity.

Now that we have described the internal working of the box, which therefore is no longer a mystery, we are in a position to calculate the transmission and reflection probabilities. The calculation is very simple, as the transmission probability is nothing but the probability that the first sphere selected by the shutter mechanism is positive, which is given by the ratio:

𝒯 = n₊/N.

Similarly, the reflection probability is given by the ratio

ℛ = n₋/N.

In other terms, the macroscopic δ-quantum machine exactly reproduces the quantum transmission and reflection probabilities ([transmission and reflection probabilities2]). (Of course, the machine cannot reproduce every scattering process associated with every incoming energy in the interval (0, ∞), but only a finite subset of them, precisely those for which the incoming energy is of the form E = n₊/n₋.)
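To make the correspondence concrete, here is a minimal Python sketch of the machine just described: the outcome of each run is decided by the charge of the first sphere selected by the shutter, and the resulting relative frequency of transmissions is compared with the ratio n₊/N and with the quantum value E/(E + 1) at E = n₊/n₋. The function and variable names are illustrative only, and the snippet assumes the formulas as reconstructed above.

```python
import random

def simulate_delta_machine(n_plus, n_minus, runs=100_000, seed=0):
    """Relative frequency of 'transmission' (first selected sphere is positive)."""
    rng = random.Random(seed)
    spheres = [+1] * n_plus + [-1] * n_minus
    transmitted = 0
    for _ in range(runs):
        rng.shuffle(spheres)          # uncontrollable way the entity breaks apart
        if spheres[0] > 0:            # first sphere positive -> lever tilts right
            transmitted += 1
    return transmitted / runs

n_plus, n_minus = 3, 2                # entity made of N = 5 spheres
N = n_plus + n_minus
E = n_plus / n_minus                  # incoming "energy" associated with this state

freq = simulate_delta_machine(n_plus, n_minus)
print(f"machine frequency : {freq:.4f}")
print(f"ratio n+/N        : {n_plus / N:.4f}")
print(f"quantum E/(E+1)   : {E / (E + 1):.4f}")
```

For the state (n₊, n₋) = (3, 2) all three numbers agree (up to statistical fluctuations) at 0.6, which is the kind of convergence of relative frequencies described in the text.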
Let us now analyze the functioning of the δ-quantum machine using the language of Aerts' hidden measurement approach. First of all, we can observe that the only circumstances in which we can predict in advance whether the entity will be transmitted or reflected are the n₊ = 0 (i.e., E = 0) and n₋ = 0 (i.e., E = ∞) situations, which are the analogues of the low and high-energy limits in potential scattering. These correspond to an entity which has been prepared in electric states only composed of negatively or positively charged spheres, so that the first sphere selected by the shutter mechanism will certainly be negative or positive. In other words, for the preparation n₊ = 0 the reflectivity property is actual, whereas it is the transmissibility property that is actual for the preparation n₋ = 0. On the other hand, for states such that both n₊ and n₋ are nonzero, it is impossible to predict if the particle will be transmitted or reflected. This is because it is impossible to predict which sphere will be selected first, as the experimenter cannot control the way in which the entity rolls down the inlet tube and disassembles by colliding against the walls of the internal compartment, thus causing one of the spheres to be selected by the shutter mechanism. In other terms, if the composing spheres are not all positively or all negatively charged, each time we introduce the entity in the entry compartment we cannot know in advance which one of its composing spheres will be chosen first by the shutter mechanism. This means that the measurement involves an element of choice between "hidden" measurements, each one corresponding to a different possible selection of the first sphere by the shutter mechanism. And since these choices are not under the control of the experimenter, this is the reason why we can say that the measurement process involves hidden measurements, and that it is the lack of knowledge about which individual deterministic measurement is selected that is responsible for our inability to predict in advance the outcome of the experiment, which we can only evaluate in probabilistic terms.

In this section we want to enlarge the class of measurements that we can perform on the entity, and we will do so by modifying the functioning of the shutter mechanism in the box. This will allow us to derive transmission and reflection probabilities which can be described neither by a classical nor by a quantum scattering system. Given an entity with a specific electric state, we consider now a set of N structurally different measurements, indexed by an integer number k ∈ {1, ..., N} (not to be confused with the momentum defined in sec. [delta-scattering]). In a k-measurement, the shutter mechanism selects each time k spheres simultaneously, and lets them fall, at the same time, between the charged plates of the capacitor. If the majority of the selected spheres are positively (resp. negatively) charged, the number of them which will be deviated to the right (resp. left) will be greater than the number that will be deviated to the left (resp. right), thus causing the lever to go down to the right (resp. left), so that all spheres will finally end their run inside the right (resp. left) compartment. The process then continues, with the shutter mechanism selecting another tranche of k spheres (or fewer, if there are not enough left), and will do so until all N spheres have been released, in successive tranches, into the capacitor. Clearly, for the same reasons we have explained in the analysis of the previous section, once the first tranche of k spheres has caused the lever to slant right (resp. left), the following tranches of k (or fewer) spheres will not be able to subsequently change the inclination of the lever, so that all spheres will in the end reassemble in the right (resp. left) compartment, thus recreating the whole entity. But what if k is even and the first selected tranche of k spheres contains exactly k/2 positive and k/2 negative spheres? In this case, the same number of spheres will be deviated to the left and to the right by the capacitor's electric field. Thus, an equal number of them will land on the left and right sides of the pivot. This is clearly a symmetric situation. However, it is also an unstable one, and the slightest fluctuation in the system will finally break the symmetry and cause the lever to tilt either to the left or to the right.
And because nothing favors the left or the right tilting, we can associate a probability 1/2 to each of the two outcomes.

Let us now calculate the transmission probability for a k-measurement, k ∈ {1, ..., N}, and an entity prepared in an electric state (n₊, n₋). The calculation is quite simple, as the transmission probability is nothing but the probability that the total electric charge of the first tranche of k selected spheres is strictly positive, plus 1/2 times the probability that the charge is zero. Using the _binomial coefficient_ C(a, b) = a!/[b!(a - b)!] (with the convention C(a, b) = 0 if b > a), and considering first the case where k is an odd integer, i.e., k = 2l + 1 (meaning that the zero charge circumstance cannot arise), we have:

𝒯_k(n₊, n₋) = [ Σ_{j=l+1}^{k} C(n₊, j) C(n₋, k - j) ] / C(N, k).   ([n-odd])

On the other hand, for k even, i.e., k = 2l, we have the weighted formula:

𝒯_k(n₊, n₋) = [ Σ_{j=l+1}^{k} C(n₊, j) C(n₋, k - j) + (1/2) C(n₊, l) C(n₋, l) ] / C(N, k).   ([n-even])

These formulae can be explicitly evaluated for different values of k. For instance, a straightforward calculation yields, for k = 2:

𝒯₂(n₊, n₋) = [n₊(n₊ - 1) + n₊ n₋] / [N(N - 1)] = n₊/N.

For k = 3, one finds:

𝒯₃(n₊, n₋) = n₊(n₊ - 1)(n₊ + 3n₋ - 2) / [N(N - 1)(N - 2)].

Longer explicit formulae can easily be written for higher values of k. In general, one finds that the probabilities for increasing k are pairwise equal, i.e., 𝒯_{2l} = 𝒯_{2l-1}, for l = 1, ..., ⌊N/2⌋ (a combinatorial fact that we shall not prove in this article, as it has no particular relevance for our discussion). Considering entities made of N = 3, N = 5 and N = 7 spheres, an explicit calculation yields, for the transmission probabilities, the values given in tables [table3], [table5] and [table7], respectively. [Table [table3]: the transmission probabilities 𝒯_k(n₊, n₋) for a compound entity made of N = 3 spheres; the numerical entries of the table are not reproduced here.]

We can now observe that, as k increases, we do not reach a strict classical regime, where the transmission probability, as a function of the incoming energy, is either 0 or 1. Indeed, for the case n₊ = n₋ (i.e., E = 1), the transmission probability is 1/2, independently of the k-measurement considered. However, also in the N even case we can say that the N-measurement [or (N - 1)-measurement, as they are identical from a probabilistic point of view] can be understood as a classical process. Indeed, also in a classical system it can happen that the incoming energy is exactly equal to the maximum of the potential. In this circumstance, the incoming particle approaching the potential will slow down and stop, right at the point where the potential reaches its maximum (or at the first of these points, if there are more than one). This however corresponds to a situation of unstable equilibrium, which in real systems will easily be destroyed by the slightest perturbation, causing the particle to be finally transmitted or reflected. And, if nothing _a priori_ favors one of the two processes, the best one can do is to attach an equal probability of 1/2 to both of them. In other terms, one can also compare the N- and (N - 1)-measurements, for N even, to classical measurements, provided one assumes that they correspond to a situation in which the incoming energy exactly equals the maximum of the potential, so that the particle will be captured by the potential, in a situation of unstable equilibrium, for a certain amount of time, until a random fluctuation causes it to escape, either to the left or to the right, with equal probability.
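The two formulae above lend themselves to a direct numerical check. The following sketch assumes, as in the mechanism described earlier, that the first tranche of k spheres is drawn without replacement from the N = n₊ + n₋ spheres; the helper name is illustrative. It can be used to verify, for small N, the pairwise equality 𝒯_{2l} = 𝒯_{2l-1} and the certainty that arises as soon as k exceeds twice the minimum number of equally charged spheres.

```python
from math import comb

def transmission_probability(k, n_plus, n_minus):
    """T_k for an entity in state (n_plus, n_minus): probability that the first
    tranche of k spheres has strictly positive total charge, plus 1/2 times the
    probability that it has zero total charge (only possible for k even)."""
    N = n_plus + n_minus
    total = comb(N, k)
    # j = number of positive spheres in the tranche (math.comb returns 0 if j > n)
    prob_positive = sum(comb(n_plus, j) * comb(n_minus, k - j)
                        for j in range(k // 2 + 1, k + 1)) / total
    prob_tie = comb(n_plus, k // 2) * comb(n_minus, k // 2) / total if k % 2 == 0 else 0.0
    return prob_positive + 0.5 * prob_tie

n_plus, n_minus = 3, 2   # N = 5
for k in range(1, 6):
    print(k, round(transmission_probability(k, n_plus, n_minus), 4))
# prints 0.6, 0.6, 0.7, 0.7, 1.0: pairwise equality T_1 = T_2 and T_3 = T_4,
# and T_5 = 1 since k = 5 exceeds 2 * min(n_plus, n_minus) = 4
```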
In the previous section we have analyzed, in some detail, the functioning of what we have called the k-model: a structurally more complex system generalizing the δ-quantum machine. As k increases, we have seen that one gradually switches from a situation of _maximum lack of knowledge_, described by a purely quantum process, to a situation of _minimum lack of knowledge_, described by a purely classical process, which is reached for k = N, in the case where N is odd, and for k = N - 1, in the case where N is even. In other terms, as k increases, we gradually decrease our lack of knowledge about the hidden measurements chosen by the machine. Intuitively, this can be understood by observing that the bigger the size of the first fragment of the entity selected by the shutter (i.e., the bigger k), the easier it is to predict its total electric charge (which is responsible for the outcome). Indeed, the bigger the size of the selected fragment, the closer its charge is to the charge of the whole entity, and therefore the better we can approximate its value. More precisely, our ability to predict the outcome of a given experiment, for an entity prepared in a state (n₊, n₋), is determined by the ratio of the number of favorable hidden experiments to the total number of hidden experiments which can be selected in a typical k-measurement, as expressed by formulae ([n-odd]) and ([n-even]). What is interesting to observe is that in the intermediate (quantum-like) situations, which are neither classical nor quantum, there are states for which, although n₊ n₋ ≠ 0 (i.e., although the spheres composing the entity are not all of the same charge), we are nevertheless in a position to predict with certainty the outcome. This happens each time that k > 2 min(n₊, n₋), i.e., each time that k exceeds twice the minimum number of equally charged spheres composing the entity.

As we have mentioned in the introduction, our δ-quantum machine and k-model have been inspired by Aerts' spin-quantum machine and ε-model. Let us briefly recall the basic elements constituting Aerts' spin-quantum machine, whose functioning is isomorphic to the description of the spin of a spin-1/2 entity. Aerts considers an entity which is a simple point particle localized on the surface of a three-dimensional Euclidean sphere of unit radius, the different possible states of which are the different places the particle can occupy on it. The particularity of the model resides in the way experiments are designed. Indeed, to observe the state of the entity, the experimental protocol is to use a sticky elastic band that is stretched between two opposite points of the sphere's surface, identified by two opposite unit vectors v and -v (each couple of points defining a different experiment). Then, the procedure is to let the point particle fall from its original location (specified by a unit vector u) orthogonally onto the elastic and stick to it, then wait until the latter breaks, at some unpredictable point, so that the particle, which is attached to one of the two pieces of it, will be pulled to one of the opposite end points, thus producing the outcome of the experiment, i.e., the state that is acquired by the entity as a result of the elastic v-measurement (see fig. [spin quantum machine]). It is then straightforward to calculate, with some elementary trigonometry, the probabilities of the different possible outcomes and show that they exactly reproduce those obtained in typical Stern-Gerlach measurements on spin-1/2 quantum entities. Indeed, the probability that the particle ends up in point v is given by the length of the piece of elastic between the particle and the opposite end-point -v, divided by the total length of the elastic (which is twice the unit radius). Therefore, if θ is the angle indicated in figure [spin quantum machine], between the vectors u and v, we have that the probability for the outcome v is given by:

P_v(θ) = (1 + cos θ)/2 = cos²(θ/2),   ([probabilities quantum machine])

which is exactly the quantum probability for measuring the spin of a spin-1/2 quantum entity.
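As a quick check of this geometrical reasoning, one can simulate the uniformly breaking elastic directly. The sketch below places the two end points at +1 and -1, puts the particle's orthogonal projection at cos θ, and estimates the probability of the outcome v by Monte Carlo, comparing it with cos²(θ/2); the names and the sample size are illustrative.

```python
import math
import random

def spin_machine_probability(theta, runs=200_000, seed=1):
    """Fraction of runs in which the particle is pulled to the end point v.

    The elastic stretches from -1 (end point -v) to +1 (end point v); the particle
    sticks at x = cos(theta); a uniform break below the particle attaches it to v."""
    rng = random.Random(seed)
    x = math.cos(theta)
    wins = sum(1 for _ in range(runs) if rng.uniform(-1.0, 1.0) < x)
    return wins / runs

theta = 2 * math.pi / 3
print(spin_machine_probability(theta))   # ~0.25
print(math.cos(theta / 2) ** 2)          # (1 + cos(theta))/2 = 0.25
```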
On the basis of his spin-quantum machine, Aerts then considers a more general machine, called the ε-model, employing elastics of a more complex structure. More precisely, Aerts introduces what he calls ε-elastics (we describe here a simplified version of the model, presented in ), which are uniformly breakable only in a segment of length 2ε around their center, and unbreakable in their lower and upper segments (see figure [spin epsilon machine]). An ε = 1 measurement (i.e., a measurement using a uniformly breaking elastic) corresponds to the pure quantum situation with a maximum lack of knowledge about the point where the elastic is going to break. This is the situation of the simple spin-quantum machine that we have previously described, whose probabilities are given by ([probabilities quantum machine]). An ε = 0 measurement (i.e., a measurement using a 0-elastic) corresponds to a pure classical situation with minimum lack of knowledge, where the elastic is going to break with certainty in the middle (i.e., in a predetermined point). On the other hand, a general ε-measurement, with 0 < ε < 1, using an ε-elastic which can (uniformly) break only around its center, in a segment of length 2ε, corresponds to a quantum-like situation (which is neither quantum nor classical) of intermediate knowledge. The associated probabilities are easy to calculate, and one has to distinguish the following three cases:

(1) if the particle, when it falls orthogonally onto the elastic, lands on its upper unbreakable segment (cos θ ≥ ε), then: P_v(θ) = 1, P_{-v}(θ) = 0;

(2) if the particle, when it falls orthogonally onto the elastic, lands on its central uniformly breakable segment of length 2ε (-ε ≤ cos θ ≤ ε), then: P_v(θ) = (cos θ + ε)/(2ε), P_{-v}(θ) = (ε - cos θ)/(2ε);

(3) if the particle, when it falls orthogonally onto the elastic, lands on its lower unbreakable segment (cos θ ≤ -ε), then: P_v(θ) = 0, P_{-v}(θ) = 1.

Clearly, the parameter ε plays in Aerts' ε-model the same role as the k-parameter in our k-model: by varying it one varies the level of knowledge (or level of control) the experimenter has in relation to the experiment performed, describing in this way a (here continuous) transition from purely quantum (ε = 1), to quantum-like (0 < ε < 1), to purely classical regimes (ε = 0); see for more details about this transition.
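The three cases above can be collected into a single function. The sketch below assumes the parametrization recalled here (an elastic of total length 2 whose breakable segment is [-ε, ε]), and shows how the same angle θ interpolates between the pure quantum case (ε = 1) and the classical, deterministic limit (ε → 0); the function name is illustrative.

```python
import math

def epsilon_model_probability(theta, eps):
    """Probability of the outcome v when the particle lands at cos(theta) on an
    epsilon-elastic that can only break, uniformly, in the segment [-eps, +eps]."""
    x = math.cos(theta)
    if eps == 0.0:                    # classical limit: break exactly in the middle
        return 1.0 if x > 0 else (0.5 if x == 0 else 0.0)
    if x >= eps:                      # upper unbreakable segment
        return 1.0
    if x <= -eps:                     # lower unbreakable segment
        return 0.0
    return (x + eps) / (2 * eps)      # central, uniformly breakable segment

theta = 2 * math.pi / 3
print(epsilon_model_probability(theta, 1.0))   # 0.25: pure quantum case, cos^2(theta/2)
print(epsilon_model_probability(theta, 0.4))   # 0.0 : intermediate, already deterministic here
print(epsilon_model_probability(theta, 0.0))   # 0.0 : pure classical case
```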
Of course, there are many differences between our δ-quantum machine and corresponding k-model, and Aerts' spin-quantum machine and corresponding ε-model. One is the obvious fact that they modelize different physical systems and therefore yield different probabilities. Another one is the greater structural richness of Aerts' ε-model. This is so not only because the parameter ε is continuous, whereas the parameter k is discrete, but also because, for a given ε and state of the point particle entity (specified by the unit vector u), there is in Aerts' model an infinity of different possible experiments, corresponding to the different possible orientations of the elastic (specified by the unit vector v). On the other hand, for a given k and state of the entity, there is in our model only a single possible experiment. Of course, this is how it should be, seeing that there is only a single spatial direction in a one-dimensional scattering experiment. However, this greater structural richness which is present in Aerts' model becomes essential if one wants to rigorously prove the non-Kolmogorovian nature of the probability model involved, as for this at least three different experiments are needed.

On a different level, there is another important difference between the two models: in Aerts' quantum machine the "breaking mechanism," which is at the origin of randomness, is associated with the measuring apparatus (the breaking of the elastic band), whereas in our quantum machine the "breaking mechanism" is associated with the entity itself, which during the course of the measurement is temporarily disassembled. In other terms, in Aerts' model the entity under study, a classical point particle, always remains present and localized in our three-dimensional Euclidean space, during the entire experiment. On the other hand, the entity of our model is present in our three-dimensional space only at the beginning of the experiment, when it is introduced in the machine, in a given electric state, and at the end of it, when its presence is again observed in one of the two left and right exit compartments.

Following the discussion of the previous section, a natural question arises: what happens to the entity during the course of the experiment? As the functioning of the machine presents no mysteries, we can easily answer this question. For this, we first have to remember that the entity exists in our three-dimensional space for as long as it conserves its identity of being a whole, cohesive cluster-entity, made of N charged spheres. Therefore, the moment the entity disassembles by falling inside the central internal compartment of the machine, it disappears from our sight, i.e., it disappears from our three-dimensional space. In other terms, during the measurement process, the entity is temporarily destroyed and, in its place, smaller entities are created. These smaller entities, which are fragments of the original entity, interact separately with the different elements of the machine, before being all reassembled together, inside one of the two exit compartments. This means that the _mode of being_ of the entity changes during the different phases of the measuring process. It _actually exists_, in the sense that it is present in our three-dimensional space, when it is introduced in the machine; it _potentially exists_, in the sense that it is no longer present in our three-dimensional space, when it interacts with the different elements of the machine; and it comes again into actual existence, in the sense that it re-emerges (or re-immerges) in our three-dimensional space, when its different fragments are brought once again together. What is interesting to observe is that during the measurement process the different fragments originating from the entity explore different regions of the three-dimensional space. Some of them, in certain moments, can be found on the lever on both sides of the pivot, so that, before the measurement is terminated, we can say that the potential entity is in a sort of superposition of partially transmitted and reflected components. Therefore, the entity, while it is in its _potential mode of being_, which is a _non-spatial_ mode of being (relative to our three-dimensional space), can behave as a genuine _non-local_ entity, i.e.
, an entity made of parts which , seen from our ordinary three - dimensional perspective , appears to be separated and independent , but seen from a non - ordinary perspective , are still connected and form a whole .this means that , as it has been emphasized by aerts in a number of papers , _ non - locality _ of quantum ( or quantum - like ) entities is first of all a manifestation of _ non - spatiality _ ( see also the arguments presented in ). this point being rather subtle , let us try to explore it a little further .for this , we can observe that our three - dimensional euclidean space is a very specific `` theatre '' that we humans have isolated from the rest of reality , through the cognitive filters that emerged from our experiences with macroscopical entities .these macroscopic entities , aerts explains , can be characterized by what he calls the property of _ macroscopic wholeness_. more precisely , quoting from : * macroscopic wholeness*. for macroscopic entities we have the following property : if they form a whole ( hence are not two separated parts ) , then they hang together through space . which means they can not be localized in different macroscopically separated regions and of space , without also being present in the region of space ` between ' these separated regions and . in other terms , from our ordinary spatial perspective , a composite ( decomposable ) entity exists as such , i.e. , as a whole , for as long as its composing parts remain connected together _ through space_. however , we may ask if there are other possibilities in reality for the composing parts of an entity to remain connected together , apart from `` through space . ''considering our -model , we can observe that , although is disassembled during the measurements process , nevertheless , in the end , it gets _ necessarily _ reassembled .this means that the fragments of remain ( invisibly ) dynamically connected through the specific structure of the measuring machine and , therefore , although temporarily spatially separated , they are nevertheless `` hanging together '' in a more subtle way .the present discussion , as evident as it might appear , touches at the heart of our understanding of physical reality , and more particularly of our understanding of the fundamental concepts of spatiality and non - spatiality , actuality and potentiality , soft and hard act of creations , and macroscopic wholeness .as we are going now to explain , thanks to the intuitions we have gained from our -model , all these concepts are in fact intimately related .let us start with the concept of potentiality .* potentiality*. as we have explained in the introduction , a property is potential if it is not actual , i.e. , if one can not predict with certainty , even in principle , the `` yes '' outcome of the associated test .potentiality however , can either be deterministic or indeterministic .more precisely , we shall say that a property is _ deterministically potential _ if the `` no '' outcome can be predicted with certainty .on the other hand , we shall say it is _ indeterministically potential _ ( or , which is equivalent , _ indeterministically actual _ ) if it is neither actual nor deterministically potential , which means that the `` yes '' and `` no '' answers have both a certain propensity to manifest , but none of them can be predicted with certainty , i.e. , the associated probabilities are strictly different from and . 
for instance , for a classical particle , if transmissibility is actual , then reflectivity is deterministically potential , and vice versa . on the other hand , for a quantum entity ,apart from the high and low energy regimes , transmissibility and reflectivity are indeterministically potential .finally , in the quantum - like intermediate situations , both deterministic and indeterministic potentiality can be present in the system .let us now consider what is the most fundamental property for any entity : _existence_. in the case of , we can identify such a property with the one of _ macroscopic wholeness _ :entity exists if macroscopic wholeness is actual , i.e. , if it forms a cohesive whole through space , in the sense defined by aerts above .clearly , at the beginning of the scattering experiment , the property of macroscopic wholeness of is actual , and one can say that is in its _ actual mode of being_. however , as soon as it falls in the inner compartment , and breaks in several pieces , its macroscopic wholeness becomes deterministically potential .accordingly , one can say that enters into a _ potential mode of being_. and , as soon as the machine completes the measurement , macroscopic wholeness is restored , and the mode of existence of becomes actual again .it is worth emphasizing that when we speak of the actuality or potentiality of a property ( be it deterministic or indeterministic ) , it is always relative to a given moment of time ( typically , the moment at which the outcome of the test that defines the property becomes available , if one would chose to perform it ) .let us also observe that , as soon as is destroyed , during the measuring process , it literally disappears from our ordinary spatial perspective .but , if this is true , in what sense can we nevertheless say that still exists , although not in the actual sense ? in other terms , does the statement `` is potentially existing '' have some objective correspondence in our reality , or is it just a way of saying , a heuristic statement that we must take care not to reify ? as we said , during the experiment , entity temporarily disassembled , i.e. , destroyed .following coecke s distinction between soft and hard acts of creations ( see the introduction ) we can say that the machine performs , during the measurement , a succession of hard acts of creation on ( and on its fragments ) , and although most of these acts are destructive , as they destroy the macroscopic wholeness of , if taken all together they constitute in fact a soft act of creation , as is clear from the fact that is recreated in the final stage of the experiment , _ with certainty_. what is crucial to understandis that _ the machine acts in a deterministic way in the process of actualization of the potential existence of _ , in the sense that if we exclude anomalies ( such as for example an experimenter who , by distraction , would hit the machine and make it fall to the ground while it is functioning ) , we can predict with certainty that at the end of the measurement will be reassembled . to put it another way ,if nothing perturbs the functioning of the apparatus , the different fragments of will never lose their _ mutual coherence _, through the mediating structure of the machine , and thus remain ( dynamically ) connected _ through time _ in such a way that their `` hanging together '' _ through space _ will be guaranteed at the end of the process .these considerations lead us to define the new concept of _ process - actuality_. 
* process - actuality*. a property is _ process - actual _ ( _ p - actual _ ) , in a given moment , if it is actual in that moment or , if not , it will become actual in a subsequent moment , with certainty . in other terms , a property is p - actual at time , if there exist a time , such that the property is actual at ._ : actuality p - actuality .thanks to the notion of process - actuality , we can define the following properties : * process - macroscopic wholeness*. an entity possesses the property of _ process - macroscopic wholeness _ ( _ p - macroscopic wholeness _ ) , in a given moment , if the property of macroscopic wholeness is p - actual in that moment ._ : macroscopic wholeness p - macroscopic wholeness .* process - existence*. an entity exists in the process sense ( _ p - existence _ ) , in a given moment , if its existence is p - actual in that moment .( this means , in particular , that some of the entity s intrinsic defining properties are p - actual in that moment ) ._ : existence p - existence .in this section we exploit the process - actuality criterion to provide a clear definition and characterization of the important notion of non - spatiality . for this, we start by observing that _ existence _ and _ spatiality _ are intimately related concepts .indeed , to exist is to exist in a given space , i.e. , in the space to which belong the measuring apparatus that are used to test the properties and attributes of the entity under consideration .entities with different attributes can belong to a same space , and interact together in some way ( by `` belonging to '' a space we do nt only mean `` to be present in '' a space , but , more generally , `` to be detectable in '' a space ; see in this regard the discussion in , sec .2 ) . on the other hand , within a same space , one can also identify subspaces , i.e. , substructures that are characterized by the specific attributes of the entities that , by definition , belong to them . considering our _ physical space _ , we can certainly highlight in it an important subspace , that we can simply call the _ ordinary physical space _: * ordinary physical space*. the _ ordinary physical space _ ( ) is that part of our physical space ( ) that contains entities for which the property of macroscopic wholeness is actual ._ : .is isomorphic to the three - dimensional euclidean space ? that s possible , but not certain , as we can not a priori exclude the existence in our physical space of , say , four - dimensional macroscopically whole entities ( think about abbott s metaphor of flatland ) .also , macroscopic wholeness may not be a sufficient condition to characterize as our 3- space , and other attributes may be needed for this .however , not to complicate the discussion , we shall assume in the following that , as defined above , is indeed isomorphic to the three - dimensional euclidean space .let us now define what we shall call , for lack of a better term , the _ extraordinary physical space _ :* extraordinary physical space*. the _ extraordinary physical space _ ( ) is that part of our _ physical space _ ( ) that contains entities for which the property of macroscopic wholeness is p - actual ._ : . with the above definitions, we can also define the following two spaces ( see fig .[ spaces ] ) : * intermediate physical space*. the _ intermediate physical space _ ( ) is that part of that contains entities which are not in , i.e. 
, for which macroscopic wholeness is deterministically potential .in other terms , in a set - theoretical sense : .* hyperordinary physical space*. the _ hyperordinary physical space _ ( ) is that part of that contains entities which are not in , i.e. , such that p - macroscopic wholeness is deterministically potential . in other terms , in a set - theoretical sense : ._ : the exact characterization of entities belonging to , if any , is unknown .we are now in a position to propose a precise definition of non - spatiality , as this notion is conventionally used ( also by the present author ) in connection with quantum and quantum - like entities .* non - spatiality*. a non - spatial physical entity is , by definition , an entity that belongs to the intermediate physical space .this means that non - spatiality is not a condition of absence of spatiality , but a condition of intermediate spatiality , such that ordinary spatiality and hyperodinary spatiality are absent .let us illustrate the content of the above definitions , using the guiding example of entity . at the beginning of the experiment , which as a whole is a soft act of creation , exists in , as a macroscopically whole entity .then , in the course of the experiment , it ceases to manifest in , but is not for this totally annihilated , as it continues to exist in , as a p - existing , p - macroscopic whole entity ( a condition improperly called of non - spatiality ) . then , at the end of the experiment , it manifests again in , by acquiring once more the property of macroscopic wholeness . considering microscopic quantum entities , like for instance an electron ,we are now equipped with some interesting conceptual tools that allow us to describe what might possibly happen between the preparation of the entity , at the beginning of a typical quantum measurement , and the `` click in the counter , '' at the end of it .if we assume that the conceptual framework we have so far explored with the help of our model is pertinent , we can think of a microscopic entity , like an electron , as a sort of composite entity . in some instances , when it is fully assembled in , in a state of macroscopic wholeness , we are able to `` see '' it , with the `` eyes '' of our macroscopic instruments ( which also belong to ). on the other hand , in some other instances , the electron - entity may disappear from our `` sight '' , by losing its macroscopic wholeness , i.e. , its wholeness through , which then becomes a process - like , dynamical form of wholeness .in these moments , the electron only p - exists , but nevertheless still exists , within , in a condition where its macroscopic wholeness is deterministically potential . 
nevertheless , since the electron has not been destroyed , having been acted upon not by a hard act of creation , but by a soft act of creation ( more precisely , by a succession of hard acts of creation whose overall effect results in a soft act of creation ) , it will finally demanifest from that `` non - spatial '' realm , to manifest again in our ( three - dimensional ) ordinary space , by restoring its macroscopic wholeness .quoting aerts from : `` reality is not contained within space .space is a momentaneous crystallization of a theatre for reality where the motions and interactions of the macroscopic material and energetic entities take place .but other entities - like quantum entities for example - ` take place ' outside space , or - and this would be another way of saying the same thing - within a space that is not the three dimensional euclidean space . ''according to our analysis , the space mentioned by aerts in the above excerpt is our ordinary physical space ( ) , whereas the other space he mentions , that is not the three dimensional euclidean space , is the intermediate space , which is included in the larger physical space , and should certainly be considered as a part of our physical reality not less objective than .but then , if this is so , why ca nt we see , with our eyes , this intermediate theatre of reality ?a possible answer is because , in our construction of reality ( and knowledge about reality ) through the instrument of our highly noun - oriented language ( particularly in western countries ) , we have ended up developping a much more `` structure - oriented '' than `` process - oriented '' view . in other terms , we have developed more the tendency to observe reality as a collection of snapshots , rather than as a collection of continuous movies , each one endowed with its indeterministic aspects ( related to our present and future acts of creations ) and deterministic aspects ( related to the effects of our past creations ) .each one of these snapshots , or moments , creates the illusion of a static three - dimensional theatre , filled with ordinary objects , all characterized by the property of macroscopic wholeness . 
in other terms , by only creating our reality on `` instants , '' we generate the illusion of a `` snapshot - space , '' which we believe then to constitute a unique all inclusive theatre .however , as we expand the consciential crack through which we look to the world ( for instance by becoming aware of the auua , the additional unconsciously used assumption that are present in our language and cognitive processes ) , we may realize that our rough cognitive filters are in fact screening us from the more dynamical ( process - oriented ) vision of the innumerable non - spatial entities , which are also objectively participating to our reality , although in a different mode of being .let us point out that such a perceptual expansion is not about simply replacing our naive three - dimensional spatial theatre with an equally naive four - dimensional spacetime theatre , in which real change would nt at all be possible .however , discussing the very subtle aspects of the geometric and process views inherent in our construction of reality would go too far beyond the scope of the present work , and we refer the interested reader to aerts important contributions .in this paper we have proposed a new quantum machine model , which is able to modelize simple classical , quantum and quantum - like one - dimensional scattering processes .although our model is structurally much simpler than aerts -model , it has the advantage of providing what we think is a suggestive metaphor for quantum entities .this because it allows to visualize what could happen when a quantum entity apparently disappears from our ordinary `` view '' and , in the period of time before it is detected again , becomes a genuine `` non - spatial '' entity , i.e. , an entity which is not any more present in our ordinary physical space , but in an intermediate space , characterized by a teleological form of macroscopic wholeness . of course , there are many substantial differences between entity and a microscopic quantum entity , like an electron . apartthose we have already mentioned in the previous sections , there is the fact that the position of an electron is , generally speaking , an _ ephemeral property _ , which can only remain actual for a moment . on the contrary, has clearly the ability to remain stably present in our ordinary space , for an arbitrary amount of time .according to the view expressed in this work , the ephemeral character of the position of an electron ( or of any other microscopic entity ) would be the consequence of the ephemeral character of its macroscopic wholeness .this , by the way , is also the reason why macroscopic wholeness is so called , and is not called , for instance , microscopic wholeness ! an electron , contrary to , expresses the preference to remain in a state of p - macroscopic wholeness ( or possibly in a state of existence still different , corresponding to the hypothetical space ) and it is only when `` forced '' by the action of a suitable measuring apparatus that it can acquire , for an instant , the property of macroscopic wholeness , being consequently detected in a given position of our ordinary space . 
in that respect ,the quantum phenomenon known as the `` spreading of the wave packet , '' could very well be understood as a manifestation of this propensity of microscopic entities toward a more process - like form of wholeness and existence .another interesting aspect revealed by our model is that p - existence would be strongly dependent on the action of the apparatus upon the entity s composing fragments .in other terms , contrary to existence in the ordinary sense , p - existence would be highly contextual , as without a specific measuring apparatus , able to coherently guide the evolution of the composing parts of a microscopic entity , the phenomenon of superposition and non - locality would nt probably be possible . in that sense ,the very existence of microscopic entities , which most of their time are at best process - existing , would be much more contextual than the ordinary existence of macroscopic entities , which would express a more stable and context - independent condition of existence . to put itanother way , not only the behavior of a quantum entity , like an electron , would depend on the nature of the questions we address to it , by means of our experiments , but also its possibility of p - existing would depend on the very presence of those processes that are embodied by the ( coherence - preserving ) experimental apparatus .of course , the nature of the influence exerted by a measuring apparatus on the components of a microscopic quantum entity is quite different from the interaction that is responsible of the final detection of the entity , for instance in the form of a little spot on a screen or a click in a counter .a question then arises : if , in a sense , it is possible to understand a microscopic quantum entity , like an electron , as a sort of compound entity , and if it is true that the components of such a compound entity will generally spread out while it interacts with the context made manifest by a measuring apparatus , how comes that we never directly detect the presence of these components ?as we also discussed in the previous section , one can easily understand why we fail to detect the electron - entity , as it evolves inside the experimental apparatus : being in a `` spread out '' state , it only p - exists from the view point of our ordinary physical space .this is what our quantum machine model suggests : when is disassembled , although its composing fragments remain correlated through time , thanks to the mediation of the experimental apparatus , it disappears from our object - oriented -perspective , characterized by macroscopic wholeness .however , in our quantum machine model , we can nevertheless directly observe , if we open the machine s box , the tiny spheres forming , also when they are spatially separated .the reason for this is that the spheres have a double level of existence : they exist as the correlated components of a p - existing compound entity , but they also exist as individual entities .in other terms , each one of the composing elements of owns , in turn , the attribute of macroscopic wholeness , which is the reason why they can also be individually detected in our three - dimensional space .on the other hand , the situation of a microscopic entity , like an electron , would be different .indeed , its elementariness would prevent it from being decomposed in sub - entities that would in turn still possess the attribute of macroscopic wholeness .this suggests to define _ elementariness _ not as the property of an entity of not 
being made up of other entities , as one usually does , but as the property of an entity of not being decomposable into sub - entities that would also possess , in turn , the property of macroscopic wholeness .more precisely , in the speculative logic of the present work , we propose the following definition of elementariness : * elementariness*. an entity is _ elementary _ if it can actualize , at least ephemerally , the property of macroscopic wholeness , whereas its composing parts can not . in other terms , an elementary entity is an entity that can be detected in , whereas its components are confined in . considering the above proposed definition, elementariness would not be about being or not being decomposable ( as every entity can be assumed to always be decomposable , until proven to the contrary ) , but about the impossibility for the composing fragments to belong , even ephemerally , to .the difficulty we have in visualizing the above concept of elementariness , resides in the fact that we have the tendency to think about the composing parts of an entity in corpuscular terms . andthis is because most of our visualization tools are inherited from our three - dimensional experience of the macro - world , i.e. , from our experience with so - called ordinary _ objects _ , which are entities possessing , in a stable way , the property of macroscopic wholeness .clearly , this is where the metaphor of our quantum machine model ceases to be helpful in guiding our intuition .an electron - entity is not an ordinary object , and although we can imagine it as being decomposable , its composing parts are not in themselves elementary , but _ pre - elementary _ , as they strictly belong to a space which is beyond our ordinary level of experience .nonetheless , can we devise a way to detect these non - ordinary composing parts , these hypothetical sub - elementary `` partons , '' of an elementary entity ?a natural way to proceed would be to design experimental apparatus whose functioning would not be limited to .still , if we reflect attentively , we may realize that these non - ordinary machineries already exist , and could be nothing more than the sophisticated devices already present in our modern physics laboratories .these advanced instruments have indeed been carefully designed to reveal the quantum properties of physical entities , i.e. , to highlight the hidden and subtle connections between the sub - elementary composing parts of microscopic quantum entities , which are responsible of the observed non - local and superposition interference effects , typical of our quantum level of reality .of course , from our classical , ordinary viewpoint , it may not be easy to accept such evidence , and we could be tempted to believe that in quantum experiments we can never say what actually goes into them , and can only comment about their outputs .outputs , no doubts , are easier to comment , as they belong to . however ,if we observe the functioning of a typical quantum experiment with a more process - oriented perspective , we may conclude that , in fact , they do reveal much more than we are usually led to believe . 
to give an example , the so - called `` wave - particle duality , '' as observed in a double - slit experiment ,could be seen as an expression of the hidden connections that are present among the different components of an elementary entity , when in its process mode of being .waves indeed , can be understood as phenomena resulting from the coherent collective movement of a great number of correlated entities , forming the medium through which the wave - perturbation is said to propagate .but of course , the great difference between a classical wave and a quantum wave - like phenomenon lies in the fact that the former is a perturbation manifesting in our ordinary three - dimensional physical space , whereas the latter does nt .this bring us back to our mentioned cognitive blindness , in seeing what a quantum measurement really reveals us ; a blindness related to our bad habit of thinking to the entities populating our reality only in terms of corpuscles , or classical fields and waves .these images , as useful as they may be in the description of macroscopic entities , are nevertheless totally misleading if we use them in the description of microscopic ones ( be them elementary , like an electron , or non - elementary , like an atom or a molecule ) .but then , what would be a better notion to properly think about quantum or quantum - like entities ?a fascinating answer comes from aerts recent proposal to interpret quantum entities as ... _ conceptual entities _ !indeed , according to aerts , quantum entities would `` [ ... ] interact with ordinary matter , nuclei , atoms , molecules , macroscopic material entities , measuring apparatus , ... , in a similar way to how human concepts interact with memory structures , human minds or artificial memories . ''it s not our intention to go here into the details of this subtle explanatory framework and the interesting path that led its author to develop it , and refer the interested reader to aerts thought provoking articles .let us however use this interpretation to highlight one of the ideas we have put forward in this paper , inspired by our machine - model : the compoundness of an elementary particle . for this ,let us consider , as an example , the conceptual entity called `` apple . 
'' clearly , such a conceptual entity can manifest in our ordinary physical space , each time that a physical apple comes into being .if , for simplicity , we assume that apples ripen only in a very specific and short period of the year , and that soon after they are all eaten , then , similarly to an electron , we can say that an apple conceptual entity will live most of its time outside of our ordinary space , and just briefly enter it , when the great spring experiment is performed .now , considering an apple - object , which is the objectification of an apple conceptual entity ( in the same way as the spot we observe on a detection screen can be considered as the objectification of an electron conceptual entity ) , we can easily think of it as a compound entity , made of other objects , like for instance its peel , pulp , seeds , stem , and so on .clearly , all these connected parts individually belong to , as the whole apple - object does .but what about the apple conceptual entity , can we also understand it as a compound entity ?consider for instance the concepts `` typical '' and `` fruit '' .these two concepts , contrary to the apple - concept , can not be also understood as objects .indeed , there are no objects in corresponding to `` typical '' and `` fruit '' ( on the shelves of a grocery store one finds apples , pears , oranges , etc . , but not the fruits called fruit ! ) .considering however the connection ( through meaning ) of the `` typical '' and `` fruit '' concepts , we obtain the composed concept `` typical fruit , '' which , for almost every person , is nothing but an apple .in other terms , we have an example of a conceptual entity , an apple , which , from time to time , can manifest as an object in , and which can be understood as the combination two other conceptual entities , `` typical '' and `` fruit , '' that instead can not manifest inside of .this perfectly illustrates our above notion of elementariness .similarly to the apple conceptual entity , the electron conceptual entity can be understood as the composition ( combination ) of other conceptual entities , which , however , can not manifest , not even ephemerally , in .of course , the `` apple '' example is not a perfect example of an elementary conceptual entity , as it is also possible to decompose it in parts which are objects .an electron on the other hand , would be truly elementary because the only possible decompositions would be of the `` typical - fruit '' kind , and not of the `` peel - pulp '' kind .( in that sense , the apple example better describes an atom , or a molecule , than an elementary entity like an electron ) .much more should certainly be said about the conceptual status of quantum entities , to truly appreciate the explicative power of this interpretation , recently developed by aerts .also , much more should be said about the many fundamental notions we have just touched upon in this article , many of which are quite speculative and certainly deserve a larger space of analysis .we hope this larger space will be available in future works . 
before concluding , a word of warning is due .as we have seen , our machine model is quite evocative and seems to suggest that , in some way , quantum entities could be understood as some sort of composite entities , a view that , as we have briefly explained , is not incompatible with aerts conceptual interpretation of quantum mechanics .nevertheless , we would like to point out ( especially for the hasty reader that would have been left with a wrong impression ) that the -quantum machine model is not meant to suggest that quantum entities should be considered , in a literal sense , as composite entities made of minuscule undetectable classical ( bohomiam - like ) particles , able to spread all over the space .this no more than aerts spin - quantum machine model is meant to suggest that there is a real breakable elastic band hidden somewhere in a stern - gerlach magnet ! the truly interesting aspect about these models is not their ability to realistically describe physical entities as such , but to capture , by means of powerful structural analogies , the possible logic that would be at the basis of their interaction with the different experimental contexts , particularly for what concerns the emergence of quantum probabilities . on that purpose ,it is important to observe that aerts spin - quantum machine is able to capture the essence of the quantum probability structure without any need to assume that the entity under investigation has a composite structure .also , there are certainly ways to adapt the spin - quantum machine model to also characterize one - dimensional quantum scattering processes , as is clear from the fact that there is an isomorphism between a spinor and the two - component column vector representing an incoming / outgoing one - dimensional scattering state ( for a given energy ) . if we are saying all this , is to bring the reader to consider that the composite nature of entity , in our -quantum machine model , is not necessarily a fundamental logical ingredient in the description of a quantum entity .of course , _ compoundness _ seems to be important at some level of the description , as for instance also in the spin - quantum machine model the elastic is a composite entity , that can be disassembled ( broken ) in different ways .but we do nt know if this `` compoundness - breakability '' property is a _ sine qua non _ ingredient in the quantum description of reality and , if so , at what level it should be applied ( at the level of the entity , of the measuring apparatus , of both , etc . ) . in other terms , our words of caution are to point out that the really profound aspects revealed by the different machine models is probably not in what distinguishes them , in terms of details , but in what they have in common , at a structural level , like for instance a built - in mechanism that selects ( in a highly contextual way ) a deterministic `` hidden '' measurement , and the possibility of creating new properties during its execution .it is now time , to conclude , and leave the last word to aristotle , whose idea of causality , in his theory of movement ( and , more generally , of transformation ) expresses , in a way , the idea of process - existence that we have put forward in the present article . quoting from : `` [ ... ] _ everything that comes to be moves towards a principle , i.e. 
an end .for that for the sake of which a thing is , is its principle , and the becoming is for the sake of the end ; and the actuality is the end , and it is for the sake of this that the potentiality is acquired . _ ''i d like to express thanks to two anonymous referees , for the attentive reading of the manuscript and their helpful comments , which have contributed in significantly improving its presentation .i m also grateful to diederik aerts , for his support and interest about the subject of the present article . 40 segal , i. e. , ann . math .48 , 930 ( 1947 ) .emch , g. g. , mathematical and conceptual foundations of 20th century physics , north - holland , amsterdam ( 1984 ) .birkhoff , g. and von neumann , j. , ann . math .37 , 823 ( 1936 ) .jauch , j. m. , foundations of quantum mechanics , addison - wesley ( 1968 ) .piron , c. , `` foundations of quantum physics , '' w. a. benjamin inc . , massachusetts ( 1976 ) .foulis d. , randall c. , j. math ., 1667 ( 1972 ) .randall , c. and foulis , d. , a mathematical language for quantum physics , in les fondements de la mecanique quantique , ed . c. gruber et al , a.v.c.p ., case postale 101 , 1015 lausanne , suisse ( 1983 ) .gudder , s. p. , quantum probability , academic press , inc .harcourt brave jovanovich , publishers ( 1988 ) .accardi , l. , on the statistical meaning of the complex numbers in quantum mechanics , nuovo cimento , 34 , 161 ( 1982 ) .pitovski , i. , quantum probability - quantum logic , springer verlag ( 1989 ) .feynman , r. p. , `` the character of physical law , '' penguin books ( 1992 ) .aerts , d. , `` the entity and modern physics : the creation - discovery view of reality , '' in `` interpreting bodies : classical and quantum objects in modern physics , '' ed .castellani , e. princeton unversity press , princeton ( 1998 ) .aerts , d. , `` the stuff the world is made of : physics and reality , '' p. 129, in `` the white book of ` einstein meets magritte ' , '' edited by diederik aerts , jan broekaert and ernest mathijs , kluwer academic publishers , dordrecht , 274 pp .aerts , d. , `` quantum mechanics : structures , axioms and paradoxes , '' p. 141, in `` the indigo book of ` einstein meets magritte ' , '' edited by diederik aerts , jan broekaert and ernest mathijs , kluwer academic publishers , dordrecht , 239 pp .aerts , d. , `` a possible explanation for the probabilities of quantum mechanics , '' j. math , phys . , * 27 * , pp . 202210 ( 1992 ). aerts , d. , `` quantum structures : an attempt to explain the origin of their appearance in nature , '' international journal of theoretical physics , 34 , 1165 ( 1995 ) .aerts , d. and sozzo , s. , `` contextual risk and its relevance in economics , '' arxiv:1105.1812v1 [ physics.soc-ph ] .aerts , d. and sozzo , s. , `` a contextual risk model for the ellsberg paradox , '' arxiv:1105.1814v1 [ physics.soc-ph ] .aerts , d. and durt , t. , `` quantum , classical and intermediate , an illustrative example , '' found .phys . , * 24 * , 1353 ( 1994 ) .piron , c. , `` la description dun systme physique et le prsuppos de la thorie classique , '' annales de la fondation louis de broglie , * 3 * , pp .131 - 152 ( 1978 ) .piron , c. , `` mcanique quantique .bases et applications , '' presses polytechniques et universitaires romandes , lausanne ( second corrected edition 1998 ) , first edition ( 1990 ) .aerts , d. , `` description of many physical entities without the paradoxes encountered in quantum mechanics , '' found .phys . , * 12 * , pp . 11311170 ( 1982 ) .aerts , d. 
, `` an attempt to imagine parts of the reality of the micro - world , '' pp . 325 , in `` problems in quantum physics ii ; gdansk 89 , '' eds .mizerski , j. , et al . , world scientific publishing company , singapore ( 1990 ) .aerts , d. , `` the missing element of reality in the description of quantum mechanics of the epr paradox situation , '' helv .acta , * 57 * , pp .421428 ( 1984 ) .aerts , d. , `` the construction of reality and its influence on the understanding of quantum structures , '' int. j. theor .* 31 * , pp .1815 - 1837 ( 1992 ) .aerts , d. , `` the description of joint quantum entities and the formulation of a paradox , '' int. j. theor .* 39 * , pp . 485496 ( 2000 ) .coecke , b. , `` a representation for compound quantum systems as individual entities : hard acts of creation and hidden correlations , '' found . of physics , * 28 * , pp .11091135 ( 1998 ) .vassell , m. o. , lee , j. and lockwood , h. f. , `` multibarrier tunneling in heterostructures , '' j. appl .phys . , * 54 * , pp . 52065213 ( 1983 ) . born , m. , `` quantenmechanik der sto , '' z. phys .* 38 * , pp . 803827 ( 1926 ) .w. o. amrein , `` non - relativistic quantum dynamics , '' riedel , dordrecht ( 1981 ) . m. sassoli de bianchi , `` ephemeral properties and the illusion of microscopic particles , '' foundations of science , 16 , no . 4 pp . 393409 ( 2011 ) ; doi : 10.1007/s10699 - 011 - 9227-x .an italian translation of the article is also available : `` propriet effimere e lillusione delle particelle microscopiche , '' autoricerca , volume 2 , pp .3976 ( 2011 ) .m. sassoli de bianchi , `` from permanence to total availability : a quantum conceptual upgrade , '' to appear in : foundations of science ; doi : 10.1007/s10699 - 011 - 9233-z . m. sassoli de bianchi , `` time - delay of classical and quantum scattering processes : a conceptual overview and a general definition , '' arxiv:1010.5329v3 [ quant - ph ] , to appear in : central european journal of physics .aerts , d. , `` the origin of the non - classical character of the quantum probability model , '' in information , complexity , and control in quantum physics , eds .a. blanquiere et al , springer - verlag ( 1987 ) .aerts , d. , `` relativity theory : what is reality ? '' found .* 26 * , pp . 16271644 ( 1996 ) .aerts , d. , `` towards a framework for possible unification of quantum and relativity theories , '' int. phys . * 35 * , pp .23992416 ( 1996 ) .aerts , d. , `` a mechanistic classical laboratory situation violating the bell inequalities with , exactly ` in the same way ' as its violations by the epr experiments , '' helv .acta , * 64 * , pp .123 ( 1991 ) .rauch , h. , `` neutron interferometric tests of quantum mechanics , '' helv .acta , * 61 * , p. 589aerts , d. , quantum particles as conceptual entities . a possible explanatory framework for quantum theory , "foundations of science , * 14 * , pp . 361411 ( 2009 ). aerts , d. , `` interpreting quantum particles as conceptual entities , '' int. j. theor .phys . , * 49 * , pp . 29502970 ( 2010 ). smets , s. , `` the modes of physical properties in the logical foundations of physics , '' logic and logical philosophy , * 14 * , pp . 3753 ( 2005 ) .
the purpose of this article is threefold . firstly , it aims to present , in an educational and non-technical fashion , the main ideas underlying aerts' _ creation - discovery view _ and _ hidden measurement approach _ : a fundamental explanatory framework whose importance , in this author's view , has been seriously underappreciated by the physics community , despite its success in clarifying many conceptual challenges of quantum physics . secondly , it introduces a new quantum machine , which we call the _ -quantum machine _ , able to reproduce the transmission and reflection probabilities of a one-dimensional quantum scattering process by a dirac delta-function potential . the machine is used not only to demonstrate the pertinence of the above-mentioned explanatory framework in the general description of physical systems , but also to illustrate ( in the spirit of aerts' -model ) the origin of classical and quantum structures , by revealing the existence of processes which are neither classical nor quantum , but irreducibly intermediate . we do this by explicitly introducing what we call the _ -model _ and by proving that its processes cannot be modeled by a classical or quantum scattering system . the third purpose of this work is to exploit the powerful metaphor provided by our quantum machine to investigate the intimate relation between the concept of _ potentiality _ and the notion of _ non-spatiality _ , which we characterize in precise terms by introducing the new concept of _ process - actuality _ .
several researchers are working on the shape from shading (sfs) problem , but no existing solution gives good results on real images . even for complex synthetic images , several constraints must be imposed . sfs methods can be classified into three categories : the first category concerns local resolution methods ( pentland , lee and rosenfeld , tsai and shah ) , in which the surface orientation of each pixel is computed mainly from the gray-level information of its neighbors . the second category is global resolution methods , in which the solution is computed using all the pixels of the image , by passing over each pixel several times . the third category is mixed methods . in this paper we are interested in needle map integration methods , in which the reconstruction proceeds in two steps : the generation and then the integration of the needle map for the surface reconstruction ( each step can be local , global or mixed ) . the needle map is the set of normals corresponding to all the pixels of the image . we propose a method for generating the needle map using machine learning . this method is composed of three phases : the first phase is the generation of the 3d objects ; the second phase is the preparation of the examples database (offline) ; in the third phase we use the examples database to generate the needle map pixel by pixel (online) , which places the approach among the local resolution methods . among the methods dealing with shape from shading , we mention pentland's method , the first local method proposed to solve the sfs problem ; it orients every point of the image using the slant and tilt angles . there is also the method of lee and rosenfeld , which follows pentland's method but uses a perspective camera and a light source at infinity . a. sethian was the first to apply the level set method to the sfs problem ; his method uses the depth function z(x , y) to generate the level curves . jean-denis durou also worked in this category , assuming a perspective camera and a light source at infinity . like most methods proposed for the sfs problem , we suppose that the surface is smooth , the image is taken by a parallel projection camera , the light source is at infinity and the surface is regular ( lambertian ) . the paper is structured as follows : after this introduction , which presents a range of methods for solving the sfs problem together with their classification , section two summarizes the basic concepts of image formation and explains the mathematical equations used . the third section details the proposed technique based on machine learning . the last , experimental section gives the results obtained by applying our approach to synthetic and real images . sfs is the process of generating a three-dimensional shape from a single two-dimensional image , which is the reverse of the image formation process . in this section we define some basic notions and notations ( see fig . [ fig : def ] ) : * normal vector : the vector perpendicular to the surface at a point (x , y) . * light source vector : the vector representing the direction of the light source and its intensity ; the direction of s is towards the light source . * needle map : the set of normals corresponding to all the pixels of the image . * slant ( ) : the angle between the origin ( the x-axis ) and the projection of the normal onto the (x , y) plane .
* tilt ( ) : the angle between the normal vector and the z-axis ( see figure [ fig : def ] ) . * boundary condition : the characteristic ( solution ) of the pixels located at the boundary of the object is known . * neumann boundary condition : the known quantity on the boundary is the gradient ( in our case the gradient of the depth z ) . * albedo : reflectance factor ( ratio of emitted to received light ) . * lambertian surface : a surface that reflects radiation uniformly in all directions . * gradient of depth (z) : the depth variation ( the derivative of z with respect to x and y ) . * singular point : a pixel with maximum illuminance . [ fig : def ] image formation studies the generation of images from objects ( see fig . [ fig : defmodel ] ) ; this is the process used in cameras . we can generate an image from a 3d object by using the basic equation of image formation ; for more details see . the illumination e is : [ fig : defmodel ] the luminance ( brightness ) of a lambertian surface can be expressed as follows : the illumination (e) can be expressed using the angle between the two vectors n and s : k is a constant , is the albedo of the surface . at a singular point the vectors n and s coincide , , and the illumination at this point is maximal . equation ( [ eq : eq_8 ] ) can then be written as : our method belongs to the category of integration methods , which reconstruct the scene in two steps : generate the needle map , then construct the scene from it . the diagram in figure [ fig : schema1 ] shows the different steps of the reconstruction . [ fig : schema1 ] in our approach , we propose to generate the needle map using the slant and tilt angles . we can compute the normal vector from the two angles ( tilt and slant ) using the formula : as explained previously , the tilt angle is equal to the angle between the normal n and the light source s , . we propose a method to compute the slant ( the angle between the projection of n onto the image plane and the vector n ) using machine learning under some constraints . we assume that the surface is lambertian , differentiable and continuous , with a point light source located at infinity , and that the image is taken by a camera with parallel projection . our method is divided into three phases : the first phase is the generation of 3d shapes from mathematical functions ; the second phase is the preparation of the examples database (offline) ; the third phase uses the examples database to generate the needle map of the test images (online) . in this phase we generate the training data ; these data contain the inputs ( gray levels ) and the outputs ( slants ) . in order to generate the data we need the 3d objects and their gray-level images . in this work we create 3d objects by two methods : the first method is based on the generation of 3d surfaces from mathematical functions . figures [ fig : app1 ] , [ fig : app2 ] and [ fig : app4 ] show three 3d objects generated by the functions , and respectively . the second method uses 3d surfaces defined by their depth (z) , for example the silt ( figure [ fig : app5 ] ) , mozart ( figure [ fig : app6 ] ) and penny ( figure [ fig : app7 ] ) surfaces . after generating the depths ( z of each pixel ) we compute the tilt and slant angles , and then we generate the image corresponding to each 3d object .
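to make this image generation step concrete, here is a minimal numpy sketch of how a gray-level image can be produced from a depth map under the stated assumptions ( lambertian surface , parallel projection , distant point source ) . the function names , the light direction and the angle conventions are illustrative assumptions , not taken from the paper .

```python
import numpy as np

def normal_from_angles(slant, tilt):
    # unit normal reconstructed from the two angles; the exact convention
    # (which angle is measured from the z-axis) is an assumption here
    return np.stack([np.sin(tilt) * np.cos(slant),
                     np.sin(tilt) * np.sin(slant),
                     np.cos(tilt)], axis=-1)

def render_lambertian(z, s=(0.0, 0.0, 1.0), albedo=1.0):
    """gray-level image of a depth map z under a distant light source s."""
    p, q = np.gradient(z)                       # finite-difference derivatives of z
    n = np.stack([-p, -q, np.ones_like(z)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    s = np.asarray(s, float)
    s /= np.linalg.norm(s)
    return albedo * np.clip(n @ s, 0.0, None)   # e = albedo * cos(angle between n and s)

# toy synthetic surface, in the spirit of the training functions f1, f2, f3
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
z = np.exp(-(x**2 + y**2))                      # hypothetical smooth bump
image = render_lambertian(z)
```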
we finally obtain , as a result , a set of pixels with their corresponding angles . the purpose of the offline phase is to create a database containing several examples . each example contains : * input : * * the gray level of the pixel (i , j) * * the gray level and the slant of three adjacent neighbors of pixel (i , j) * output : the slant of pixel (i , j) for example , in figure [ fig : db ] , we have a pixel (i , j) and three adjacent neighbors ; the database then contains eight fields , seven inputs and one output . the inputs are the gray levels of and the slants of ; the slant of (i , j) is the output . [ fig : db ] in our approach we use the neumann boundary condition , so it is assumed that the normals along the edge surrounding the treated area of the image are known ( figure [ fig : bc ] ) . every pixel whose three adjacent neighbors have known slants is ready to have its own slant determined . the search is done by computing the euclidean distance between the ready pixel and each element of the examples database , and choosing the example with minimum distance , where : * : a ready pixel . * : a database example . * : the gray level of neighbor of p(i , j) * : the gray level of neighbor of e(i , j) * : the slant of neighbor of p(i , j) * : the slant of neighbor of e(i , j) * : the gray level of p(i , j) [ fig : bc ] there are several methods to integrate the normal field ; durou presents some iterative and non-iterative methods for normal field integration . iterative methods are slow but give better results . we use in the following the method of horn and brooks : it is simple , easy to implement and gives good results . the integration equation is as follows : + p and q are calculated from the slant ( ) and tilt ( ) . the results of our approach depend on the learning phase (offline) , so we test with several examples databases generated from different functions . figures [ fig : app1 ] , [ fig : app2 ] and [ fig : app4 ] show the objects and corresponding images generated using the mathematical functions , and . figures [ fig : app5 ] , [ fig : app6 ] and [ fig : app7 ] show three objects generated from depth matrices and the corresponding images . in all these cases the images are generated from the objects using equation [ eq : eq_8 ] . [ fig : app1 ] [ fig : app2 ] [ fig : app4 ] [ fig : app5 ] [ fig : app6 ] [ fig : app7 ] we study six different cases of the database ; for each case we apply different functions to the offline and online processes . first we generate the examples of function f1 ; since the examples are disjoint , function f1 can generate 4625 examples . the test result on the image generated by the function f2 is shown in figure [ fig : res1 ] . the average distance over all pixels equals 0.07 . [ fig : res1 ] now we use the functions f1 and f2 to generate the examples database and f3 for the test .
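the example-matching step used in all of these tests can be sketched in a few lines of numpy : for a ready pixel we form the seven input fields ( its gray level plus the gray levels and slants of its three neighbors with already-known slant ) and return the output slant of the database example at minimum euclidean distance . the array layout and field order below are assumptions made for illustration .

```python
import numpy as np

def nearest_example_slant(ready_features, database):
    """ready_features: length-7 vector
         [gray(i,j), gray(n1), gray(n2), gray(n3), slant(n1), slant(n2), slant(n3)]
       database: (N, 8) array; the first 7 columns are the input fields,
         the last column is the slant output of each stored example."""
    inputs, outputs = database[:, :7], database[:, 7]
    d = np.linalg.norm(inputs - ready_features, axis=1)   # euclidean distances
    k = int(np.argmin(d))
    return outputs[k], d[k]                               # estimated slant, distance

# hypothetical usage with a random database standing in for the f1 examples
rng = np.random.default_rng(0)
db = rng.random((4625, 8))
pixel = rng.random(7)
slant_est, dist = nearest_example_slant(pixel, db)
```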
f1 and f2 generate 9249 examples . the result is shown in figure [ fig : res2 ] ; the average distance is 0.064 , so there are pixels that have no close example . [ fig : res2 ] in this case we use f1 , f2 and f3 in the database and the silt image [ fig : app5 ] for the test ; the result is shown in figure [ fig : res3 ] . the average distance is 0.1 . note that the more complicated the image becomes , the more the average distance increases . [ fig : res3 ] the mozart and penny images contain more detail than the others , so there are several pixels that have no close example . figure [ fig : res4 ] shows the test results on the two images . the average distance is higher : 0.19 for the mozart image and 0.21 for the penny image . the distance lies in the interval between 0 and 1 ; it represents the difference between the gray level and the azimuth of a pixel in the test image and those of the nearest example in the database . the distance equals 1 if the difference equals . for the penny image the distance equals 0.21 , which also corresponds to 0.66 rad . [ fig : res4 ] an advantage of our approach is that we can add examples to the database according to the use case ; the result depends on the examples in the database . we now consider the best case , a database containing all possible examples : we put all functions ( f1 , f2 , f3 , vase , mozart and penny ) in the database , which generates 23004 examples . we test on the three images vase , mozart and penny ; the average distances equal ( 0.0000554 , 0.0004 , 0.027 ) , and the results are shown in figures [ fig : res21 ] , [ fig : res22 ] and [ fig : res23 ] . [ fig : res21 ] [ fig : res22 ] [ fig : res23 ] in the above results we tested our approach on synthetic images , for which the boundary conditions are known . to test our approach on images without boundary conditions , we use the edge of the object ( there are several methods for detecting the edge ) . we assume that the projection of the normal is perpendicular to the tangent of the edge and points toward the outside . the results shown in figures [ fig : res31 ] and [ fig : res32 ] are obtained without any additional information . [ fig : res31 ] [ fig : res32 ] most local resolution methods do not give good results , because it is difficult to determine the depth variation from the gray-level variation , and the solutions of local resolution methods are generally complex . in this work we proposed a simple local resolution method using machine learning . it gives very acceptable results compared with other local resolution methods . the advantage of our approach lies in the learning phase : the examples database can be specialized ( using the same object types ) , i.e. we can create a database adapted to the use case . in future work we will improve this approach so that we can test it on more complex images and minimize the number of constraints . j. ma , p. zhao and b. gong , a shape-from-shading method based on surface reflectance component estimation . fuzzy systems and knowledge discovery (fskd) , 2012 9th international conference , pp. 1690-1693 , may 2012 . b. kunsberg and s. w. zucker , the differential geometry of shape from shading : biology reveals curvature structure . computer vision and pattern recognition workshops (cvprw) , 2012 ieee computer society conference , pp. 39-46 , june 2012 . w. fan , k. wang , f. cayre and x. zhang , 3d lighting-based image forgery detection using shape-from-shading . proceedings of the 20th european signal processing conference (eusipco) , pp. 1777-1781 , august 2012 . j.-d. durou and j.-f. aujol , integration of a normal field in the case of discontinuities . emmcvpr '09 : proceedings of the 7th international conference on energy minimization methods in computer vision and pattern recognition , 2010 .
the aim of the shape from shading (sfs) problem is to reconstruct the relief of an object from a single gray-level image . in this paper we present a new method to solve the sfs problem using machine learning . our approach belongs to the local resolution category . the orientation of each part of the object is represented by the vector perpendicular to the surface ( the normal vector ) ; this vector is defined by two angles , slant and tilt , where the tilt is the angle between the normal vector and the z-axis , and the slant is the angle between the x-axis and the projection of the normal onto the plane . the tilt can be determined from the gray level ; the unknown is the slant . to calculate the normal of each part of the surface ( pixel ) , a supervised machine learning method is proposed . this method is divided into three steps : the first step is the preparation of the training data from 3d mathematical functions and synthetic objects ; the second step is the creation of the database of examples from the 3d objects ( off-line process ) ; the third step is the application to test images ( on-line process ) . the idea is to find , for each pixel of the test image , the most similar element in the examples database using a similarity value . integration method , machine learning , needle map , shape from shading .
when a domain is partitioned into elements , a function in a sobolev space like or has continuity constraints across element interfaces , e.g , the former has tangential continuity , while the latter has continuity of its normal component . if these continuity constraints are removed from the space , then we obtain `` broken '' sobolev spaces . discontinuous petrov galerkin ( dpg ) methods introduced in used spaces of such discontinuous functions in broken sobolev spaces to localize certain computations .the studies in this paper begin by clarifying this process of breaking sobolev spaces .this process , sometimes called hybridization , has been well studied within a discrete setting .for instance , the hybridized raviart - thomas method is obtained by discretizing a variational formulation and then removing the continuity constraints of the discrete space , i.e. , by discretizing first and then hybridizing .in contrast , in this paper , we identify methods obtained by hybridizing first and then discretizing , a setting more natural for dpg methods .we then take this idea further by connecting the stability of formulations with broken spaces and unbroken spaces , leading to the first convergence proof of a dpg method for maxwell equations .the next section ( section [ sec : break ] ) is devoted to a study of the interface spaces that arise when breaking sobolev spaces .these infinite - dimensional interface spaces can be used to connect the broken and the unbroken spaces .the main result of section [ sec : break ] , contained in theorem [ thm : duality ] , makes this connection precise and provides an elementary characterization ( by duality ) of the natural norms on these interface spaces .this theorem can be viewed as a generalization of a similar result in .having discussed breaking spaces , we proceed to break variational formulations in section [ sec : break - forms ] .the motivation for the theory in that section is that some variational formulations set in broken spaces have another closely related variational formulation set in their unbroken counterpart .this is the case with all the formulations on which the dpg method is based .the main observation of section [ sec : break - forms ] is a simple result ( theorem [ thm : hybrid ] ) which in its abstract form seems to be already known in other studies . in the dpg context, it provides sufficient conditions under which _ stability of broken forms follow from stability of their unbroken relatives_. as a consequence of this observation , we are able to simplify many previous analyses of dpg methods .the content of sections [ sec : break ] and [ sec : break - forms ] can be understood without reference to the dpg method . a quick introduction to the dpg methodis given in section [ sec : dpg ] , where known conditions needed for _ a priori _ and _ a posteriori _ error analysis are also presented .one of the conditions is the existence of a fortin operator . 
anticipating the needs of the maxwell application , we then present , in section [ sec : fortin ] , a sequence of fortin operators for and , all on a single tetrahedral mesh element .they are constructed to satisfy certain moment conditions required for analysis of dpg methods .they fit into a commuting diagram that helps us prove the required norm estimates ( see theorem [ thm : fortin ] ) .the time - harmonic maxwell equations within a cavity are considered afterward in section [ sec : maxwell ] .focusing first on a simple dpg method for maxwell equation , called the primal dpg method , we provide a complete analysis using the tools developed in the previous section .to understand one of the novelties here , recall that the wellposedness of the maxwell equations is guaranteed as soon as the excitation frequency of the harmonic wave is different from a cavity resonance .however , this wellposedness is not directly inherited by most standard discretizations , which are often known to be stable solely in an asymptotic regime . the discrete spaces used must be sufficiently fine before one can even guarantee solvability of the discrete system , not to mention error guarantees . furthermore, the analysis of the standard finite element method does not clarify how fine the mesh needs to be to ensure that the stable regime is reached .in contrast , the dpg schemes , having inherited their stability from the exact equations , are stable no matter how coarse the mesh is .this advantage is striking when attempting robust adaptive meshing strategies .another focus of section [ sec : maxwell ] is the understanding of a proliferation of formulations for the maxwell boundary value problem .one may decide to treat individual equations of the maxwell system differently , e.g. , one equation may be imposed strongly , while another may be imposed weakly via integration by parts .mixed methods make a particular choice , while primal methods make a different choice . we will show ( see theorem [ thm : maxwellcycles ] ) that the stability of one formulation implies the stability of five others .the proof is an interesting application of the closed range theorem .however , when the dpg methodology is applied to discretize these formulations , the numerical results reported in section [ sec : numer ] , show that the various methods do exhibit differences .this is because the functional settings are different for different formulations , i.e. , convergence to the solution occurs in different norms .section [ sec : numer ] also provides results from numerical investigations on issues where the theory is currently silent .in this section , we discuss precisely what we mean by breaking sobolev spaces using a mesh. 
we will define _ broken spaces _ and _ interface spaces _ and prove a duality result that clarifies the interplay between these spaces .we work with infinite - dimensional ( but mesh - dependent ) spaces on an open bounded domain with lipschitz boundary .the mesh , denoted by , is a disjoint partitioning of into open elements such that the union of their closures is the closure of the collection of element boundaries for all , is denoted by .we assume that each element boundary is lipschitz .the shape of the elements is otherwise arbitrary for now .we focus on the most commonly occurring first order sobolev spaces of real or complex - valued functions , namely , , and .their _ broken _ versions are defined , respectively , by as these broken spaces contain functions with no continuity requirements at element interfaces , their discretization is easier than that of globally conforming spaces . to recover the original sobolev spaces from these broken spaces , we need traces and interface variables .first , let us consider these traces on each element in . here and throughout denotes the unit outward normal on and is often simply written as .both and these traces are well defined almost everywhere on , thanks to our assumption that is lipschitz .the operators , and perform the above trace operation element by element on each of the broken spaces we defined previously , thus giving rise to linear maps it is well known that these maps are continuous and surjective ( for the standard definitions of the above codomain sobolev spaces , see e.g. , ) .an element of is expressed using notations like or ( even when itself has not been assigned any separate meaning ) that are evocative of their dependence on the interface normals .similarly , the elements of the other trace map codomains are expressed using notations like next , we need spaces of _ interface _ functions .we use the above trace operators to define them , after cautiously noting two issues that can arise on an interface piece shared by two mesh elements in .first , functions in the range of when restricted to is generally multivalued , and we would like our interface functions to be single valued in some sense .second , the range of the remaining trace operators consists of functionals whose restrictions to are in general undefined .the following definitions circumvent these issues . if consists of a single element , then equals , but in general ( and similar remarks apply for the other spaces ) .we norm each of the above interface spaces by these quotient norms : [ eq : norms ] these are indeed quotient norms because the infimums are over cosets generated by kernels of the trace maps .e.g. , if is any function in such that , then the set where the minimization is carried out in , namely equals the coset , where .note that every element of this coset is an extension of .for this reason , such norms are also known as the `` minimum energy extension '' norms . for an alternate way to characterize the interface spaces ,see . the quotient norm in appeared in the literature as early as .the word `` hybrid '' that appears in their title was used to refer to situations where , to quote , `` the constraint of interelement continuity has been removed at the expense of introducing a lagrange multiplier . 
''the quote also summarizes the discussion of this section well .the above definitions of our four interface spaces are thus generalizations of a definition in and each can be interpreted as an appropriate space of lagrange multipliers .we now show by elementary arguments that the quotient norms on the two pairs of trace spaces are dual to each other .the duality pairing in any hilbert space , namely the action of a linear or conjugate linear ( antilinear ) functional on is denoted by and we omit the subscript in this notation when no confusion can arise .we also adopt the convention that when taking supremum over vector spaces ( such as in the next result ) the zero element is omitted tacitly .[ lem : duality ] the following identities hold for any in and any in [ eq : sup ] the next two identities hold for any in and any in . the first identity is proved using an equivalence between a dirichlet and a neumann problem .the dirichlet problem is the problem of finding , given , such that -{\mathop{\mathrm{grad}}}(\text{div } \sigma ) + \sigma = 0 , & \quad\text{in } k. \end{array } \right.\ ] ] the neumann problem finds satisfying - \text{div } ( { \mathop{\mathrm{grad}}}w ) + w = 0 , & \quad\text{in } k. \end{array } \right.\ ] ] it is immediate that problems and are equivalent in the sense that solves if and only if solves and moreover .it is also obvious from the calculus of variations that among all -extensions of , the solution of has the minimal norm ( i.e. , is the `` minimum energy extension '' referred to earlier ) , so where we used the variational form of in the last step .( here and throughout , we use to denote the inner product in or its cartesian products . )this proves the first equality of .next , analogous to and , we set up another pair of dirichlet and neumann problems .the first problem is to find in , given any , such that the second is to find in such that the solution of has the minimal norm among all extensions of into , i.e. , .thus so taking the supremum over all in , we obtain since the reverse inequality is obvious from the definition of the quotient norm in the denominator , we have established the second identity of . to prove , we begin , as above ,by observing that is the solution to the neumann problem if and only if solves the dirichlet problem .moreover , .hence where we have used the variational form of in the last step .the proof of can now be completed as before .we follow exactly the same reasoning for the case , summarized as follows : on one hand , the norm of an interface function equals the norm of a minimum energy extension , while on the other hand , it equals the norm of the inverse of a riesz map applied to a functional generated by the interface function .the minimum energy extension that yields the interface norm is now the solution of the dirichlet problem of finding satisfying while the inverse of the riesz map applied to the functional generated by is obtained by solving the neumann problem again , the two problems are equivalent in the sense that solves if and only if solves .moreover , . hence the proof of follows from this .the proof of is similar and is left to the reader .let us return to the product spaces like and .any hilbert space that is the cartesian product of various hilbert spaces is normed in the standard fashion , where denotes the -component of any in .the dual space is the cartesian product of component duals .writing an as where , it is elementary to prove that i.e. 
, some of our interface spaces have such functionals , e.g. , the function in gives rise to where is a functional acting on which is the sum of component functionals acting on over every .other functionals like are defined similarly .we are now ready to state a few basic relationships between the interface and broken spaces . as usual, we define , and [ thm : duality ] the following identities hold for any interface space function in in in and in .[ eq : duality ] for any broken space function and , [ eq : conformequiv ] the identities immediately follow from lemma [ lem : duality ] and .the proofs of the three equivalences in are similar , so we will only detail the last one .if is in , then choosing any such that and integrating by parts over entire , because of the boundary conditions on on .now , if the left hand side is integrated by parts again , this time element by element , then we find that conversely , given that for any in , consider . as a distribution , acts on , and satisfies where we have integrated by parts element by element and denoted this notation also serves to emphasize that the term appearing on the right - hand side above is a derivative taken piecewise , element by element .clearly is in for all since , so the distribution is in . having established that , we may now integrate by parts to get for all .this shows that the trace , i.e , . while and are dual to each other , our interface spaces and are _ not _ dual to each other in general .equivalences analogous to hold with interface subspaces by a minor modification of the arguments in the proof in theorem [ thm : duality ] , we can prove that for any and , [ eq : conformequivo ] goal in this section is to investigate in what sense a variational formulation can be reformulated using broken spaces without losing stability . we will describe the main result in an abstract setting first and close the section with simple examples that use the results of the previous section .let and denote two hilbert spaces and let be a closed subspace of . for definiteness, we assume that all our spaces in this section are over ( but our results hold also for spaces over ) . in the examples we have in mind , will be a broken space , while will be its unbroken analogue ( but no such assumption is needed to understand the upcoming results abstractly ) .the abstract setting involves a continuous sesquilinear form satisfying the following assumption . [ asm : a0 ] there is a positive constant such that it is a well - known result of babuka and neas that assumption [ asm : a0 ] together with triviality of guarantees wellposedness of the following variational problem : given ( the space of conjugate linear functionals on ) , find satisfying when is non - trivial , we can still obtain existence of a solution provided the load functional satisfies the compatibility condition for all . in , the _ trial space_ need not be the same as the _ test space _ . to describe a `` broken '' version of, we need another hilbert space , together with a continuous sesquilinear form . in applications and will usually be set to a broken sobolev space and an interface space , respectively .define clearly is continuous , where is a hilbert space under the cartesian product norm . now consider the following new broken variational formulation : given , find and satisfying the close relationship between problems and is readily revealed under the following assumption . 
[asm : hybrid ] the spaces and satisfy and there is a positive constant such that under this assumption , we present a simple result which shows that the broken form inherits stability from the original unbroken form . a very similar such abstract result was formulated and proved in ( * ? ? ?* appendix a ) and used for other applications .our proof is simple , unsurprising , and uses the same type of arguments from the early days of mixed methods : stability of a larger system can be obtained in a triangular fashion by first restricting to a smaller subspace and obtaining stability there , followed by a backsubstitution - like step .below , denotes the smallest number for which the inequality holds for all and all .[ thm : hybrid ] assumptions [ asm : a0 ] and [ asm : hybrid ] imply where is defined by moreover , if for all and in , then consequently , if , then is uniquely solvable and moreover the solution component from coincides with the solution of .we need to bound and .first , next , to bound , using of assumption [ asm : hybrid ] , using the already proved bound for in the last inequality and combining , from which the inequality of the theorem follows . finally , to prove that , using , which holds if and only if . note that in the proof of the inf - sup condition , we did not fully use .we only needed .the reverse inclusion was needed to conclude that .it is natural to ask , in the same spirit as theorem [ thm : hybrid ] , if the numerical solutions of dpg methods using discretizations of the broken formulations coincide with those of discretizations of the original unbroken formulation .a result addressing this question is given in ( * ? ? ?* theorem 2.6 ) . in the remainder of this section ,we illustrate how to apply this theorem on some examples .[ eg : primal ] suppose and satisfies [ eq : poisson ] the standard variational formulation for this problem , finds in such that this form is obtained by multiplying by and integrating by parts over the entire domain .if on the other hand , we multiply by a and integrate by parts element by element , then we obtain another variational formulation proposed in : solve for in as well as a separate unknown ( representing the fluxes along mesh interfaces ) satisfying we can view this as the broken version of by setting for these settings , the conditions required to apply theorem [ thm : hybrid ] are verified as follows . noting that , an application of theorem [ thm : hybrid ] implies that problem is wellposed .this wellposedness result also shows that is uniquely solvable with a more general right - hand side in .an alternate ( and longer ) proof of this wellposedness result can be found in .the classical work of also uses the spaces and , but proceeds to develop a bubnov - galerkin hybrid formulation different from the petrov - galerkin formulation . ///[ eg : diffus ] considering a model problem involving diffusion , convection , and reaction terms , we now show how to analyze , all at once , its various variational formulations .the diffusion coefficient is a symmetric matrix function which is uniformly bounded and positive definite on , the convection coefficient is which satisfies and reaction is incorporated through a non - negative .the classical form of the equations on are and ( for some given and ) together with the boundary condition .this can be written in operator form using we begin with the formulation closest to the classical form . 
[ fig : impl ] diagram of the six implications among the five formulations : p ⇒ s , s ⇒ u , u ⇒ d , d ⇒ p , u ⇒ m , and m ⇒ s ( p : primal , s : strong , u : ultraweak , d : dual mixed , m : mixed ) . strong form : : let be a group variable . set spaces by and consider the problem of finding given , satisfying . we can trivially fit this into our variational framework by setting to . unlike the remaining formulations below , there is no need to discuss a broken version of the above strong form , as the test space already admits discontinuous functions . the next formulation is often derived directly from the second order equation obtained by eliminating from the strong form . primal form : : first , set spaces by then , with set to and set to the standard primal formulation is and its broken version is . next , consider the formulation derived by multiplying each equation in the strong form by a test function and integrating _ both _ equations by parts , i.e. , both equations are imposed weakly . it was previously studied in , but we can now simplify its analysis considerably using theorem [ thm : hybrid ] . ultraweak form : : set group variables , , , and formulations and with and are of the ultraweak type . the fourth formulation , well known as the mixed form , is derived by weakly imposing ( via integration by parts ) the first equation of the strong form , but strongly imposing the second equation . dual mixed form : : set the spaces by the well-known mixed formulation is then with , its broken version is with set to . note that the well-known discrete hybrid mixed method is also derived from . that method , however , works with a bubnov-galerkin formulation obtained by breaking both the trial and the test components , while above we have broken only the test space . the last formulation in this example reverses the roles by weakly imposing the second equation of the strong form and strongly imposing the first equation : mixed form : : set the mixed formulation is with , and its broken version is with set to . the variational problem with is sometimes called the _ primal mixed _ form to differentiate it from the _ dual mixed _ form given by . the broken formulation with and was called the _ mild weak dpg formulation _ in . their analysis can also be simplified now using theorem [ thm : hybrid ] .
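as a toy concrete instance of the primal formulation listed above , the following sketch assembles and solves the one-dimensional analogue of the model problem , -(a u')' + b u' + c u = f on (0,1) with homogeneous dirichlet conditions , using piecewise-linear elements and constant coefficients . the coefficient values and the mesh are hypothetical , and the sketch concerns only the unbroken primal form , not its broken version .

```python
import numpy as np

def primal_fem_1d(n=64, a=1.0, b=0.5, c=1.0, f=1.0):
    """p1 galerkin solution of -(a u')' + b u' + c u = f on (0,1), u(0)=u(1)=0."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = np.zeros((n + 1, n + 1))
    F = np.zeros(n + 1)
    # exact local matrices on one element of length h, for constant coefficients
    K  = a / h * np.array([[1.0, -1.0], [-1.0, 1.0]])       # diffusion
    Cv = b / 2.0 * np.array([[-1.0, 1.0], [-1.0, 1.0]])     # convection
    M  = c * h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])   # reaction (mass)
    for e in range(n):
        idx = [e, e + 1]
        A[np.ix_(idx, idx)] += K + Cv + M
        F[idx] += f * h / 2.0
    A, F = A[1:-1, 1:-1], F[1:-1]          # impose u(0) = u(1) = 0
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, F)
    return x, u

x, u = primal_fem_1d()
```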
in order to apply theorem [ thm : hybrid ] to all these formulations ,we need to verify assumption [ asm : a0 ] .this can be done for all the formulations at once , because the six implications displayed in figure [ fig : impl ] are proved in for the model problem of this example ( thus making the five statements in figure [ fig : impl ] equivalent ) .we will not detail this proof here because we provide full proofs of similar implications for maxwell equations in section [ sec : maxwell ] ( and this example is simpler than the maxwell case ) .to apply these implications for the current example , we pick a formulation for which assumption [ asm : a0 ] is easy to prove : that the primal form is coercive follows immediately by integration by parts and the poincar inequality ( under the simplifying assumptions we placed on the coefficients ) .this verifies assumption [ asm : a0 ] for the primal form , which in turn verifies it for all the formulations by the above chain of equivalences .assumption [ asm : hybrid ] can be immediately verified for all the formulations using either or .together with the easily verified triviality of in each case , we have proven the wellposedness of all the formulations above , including the broken ones.///in this section , we quickly introduce the dpg method , indicate why the broken spaces are needed for practical reasons within the dpg method , and recall known abstract conditions under which an error analysis can be conducted .let and be hilbert spaces and let be a continuous sesquilinear form . in the applications we have in mind , will always be of the form ( but we need not assume it for the theory in this section ) .the variational problem is to find in , given , satisfying the dpg method uses finite - dimensional subspaces and .the test space used in the method is a subspace of _ approximately optimal test functions _ computed for any arbitrarily given trial space .it is defined by where is given by here is the inner product in , hence by riesz representation theorem on , the operator is well defined .the discrete problem posed by the dpg discretization is to find satisfying for practical implementation purposes , it is important to note that can be easily and inexpensively computed via provided the space is a subspace of a broken space . then becomes a series of small decoupled problems on each element . for _ a posteriori _error estimation , we use an estimator that actually works for any in , computed as follows .( note that need not equal the solution of . ) first we solve for in by again , this amounts to a local computation if is a subspace of a broken space .then , set when is a broken space , the element - wise norms of serve as good error estimators .the notations and ( without tilde ) refer to similarly computed quantities with in place of .an analysis of errors and error estimators of the dpg method can be conducted using the following assumption introduced in . in accordance with the traditions in the theory of mixed methods , we will call the operator in the assumption a _fortin operator_. [ asm : pi ] there is a continuous linear operator such that for all and all , [ thm : dpg ] suppose assumption [ asm : pi ] holds .assume also that there is a positive constant such that and the set equals .then the dpg method is uniquely solvable for and the _ a priori _ error estimate holds , where is the unique exact solution of . 
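since the local computations just described drive any practical dpg implementation , it may help to see them spelled out as plain linear algebra on a single element : given the rectangular matrix b of the bilinear form between the trial basis and an enriched ( broken ) test basis , and the gram matrix g of the test-space inner product , the optimal test functions , the condensed stiffness matrix and the residual-based error indicator are all obtained from local solves with g . the sketch below uses placeholder matrices and is not tied to any particular choice of forms or spaces in this paper .

```python
import numpy as np

def dpg_element_system(B, G, l):
    """B: (m, k) matrix of b(trial_j, test_i) against an enriched test basis,
       G: (m, m) hermitian positive-definite test gram matrix,
       l: (m,) load vector; returns local stiffness, load, and an error indicator."""
    T = np.linalg.solve(G, B)               # columns represent the optimal test functions
    K = B.conj().T @ T                      # normal equations  B* G^{-1} B
    rhs = B.conj().T @ np.linalg.solve(G, l)

    def eta_squared(u_local):
        r = l - B @ u_local                 # residual against the enriched test basis
        return float(np.real(r.conj() @ np.linalg.solve(G, r)))

    return K, rhs, eta_squared

# hypothetical small element: 3 trial dofs tested against 6 enriched test dofs
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 3))
G = np.eye(6) + 0.1 * np.ones((6, 6))       # any spd stand-in for the riesz matrix
l = rng.standard_normal(6)
K, rhs, eta2 = dpg_element_system(B, G, l)
u_loc = np.linalg.solve(K, rhs)
print(eta2(u_loc))
```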
moreover , we have the following inequalities for any in and its corresponding error estimator , with the data - approximation error [ eq : estimator ] here and are any constants that satisfy and , respectively , for all and . to apply the theorem to specific examples of dpg methods , we must verify .this will usually be done by appealing to theorem [ thm : hybrid ] and verifying assumptions [ asm : a0 ] and [ asm : hybrid ] .the previous sections provided tools for verifying assumptions [ asm : a0 ] and [ asm : hybrid ] . in the next section, we will provide some tools to verify the remaining major condition in the theorem , namely assumption [ asm : pi ] .a proof of theorem [ thm : dpg ] is available in existing literature .the _ a priori _ error bound was proved in .the inequalities of , useful for _ a posteriori _error estimation , were proved in .in particular , a reliability estimate slightly different from ( with worse constants ) was proved in , but the same ideas yield easily ( for example , cf .* proof of lemma 3.6 ) ) .the operator is an approximation to an idealized trial - to - test operator given by if is the operator defined by the form satisfying for all and , then clearly , where is the riesz map defined by . in some examples , it is possible to analytically compute and then one may substitute with the _ exactly optimal test space _ . the above - mentioned trial - to - test operator should not be confused with another trial - to - test operator of ( also cf . ) : application of requires the inversion of the dual operator .the fortin operator appearing in assumption [ asm : pi ] is problem specific since it depends on the form and the spaces .however , there are a few fortin operators that have proved widely useful for analyzing dpg methods , including one for and another for , both given in . in this section, we complete this collection by adding another operator for intimately connected to the other two operators. its utility will be clear in a subsequent section .since the fortin operators for dpg methods are to be defined on broken sobolev spaces , their construction can be done focusing solely on one element .we will now assume that the mesh is a geometrically conforming finite element mesh of tetrahedral elements .let denote the set of polynomials of degree at most on a domain and let denote the ndlec space . for domains , ,let denote the raviart - thomas space .we use to denote the orthogonal projection onto . from now on , let us use to denote a generic constant independent of . its value at different occurrences may differ and may possibly depend on the shape regularity of and the polynomial degree .[ thm : fortin ] on any tetrahedron , there are operators such that the norm estimates [ eq : bdd ] hold , the diagram commutes , and these identities hold for any , , and : note that the duality pairings above must be taken in the appropriate spaces , as in . to provide a constructive proof of theorem [ thm : fortin ] , we will exhibit fortin operators .we will use the exact sequence properties of the finite element spaces appearing as codomains of the operators in the theorem .we can not use the canonical interpolation operators in these finite element spaces because they do not satisfy .hence we will restrict the codomains of our operators to the following subspaces whose construction is motivated by zeroing out the unbounded degrees of freedom . here denotes a tangent vector along the underlying edge , for all faces of , and and are defined as follows . 
to simplify notation ,let ( the space of functions on that are polynomials of degree at most on each face of ) and let .let denote the -orthogonal complement of in and let denote the -orthogonal complement of in .the following result is proved in ( * ? ? ?* lemma 3.2 ) .[ lem : grad ] for any , there is a unique in satisfying [ eq : pi0 ] we define as a minor modification of the analogous operator in . given any ,first compute its mean value on the boundary then split where has zero mean trace , and finally define [ lem : pigrad ] satisfies and for any since , equations and immediately yield and . to prove the norm estimate , note that standard scaling arguments imply for all in . combining and ,we get while combining and , these estimates together prove .the next lemma is proved in ( * ? ? ?* lemma 3.3 ) .it defines exactly as in .[ lem : div ] any satisfying vanishes .moreover , for any , there is a unique function in satisfying . it also satisfies .the remaining operator will be defined after the next result .it is modeled after the previous two lemmas , but requires considerably more work .[ lem : uniq ] any satisfying [ eq : pihcurltmp ] vanishes . integrating by parts twice and using , we have in addition , by stokes theorem applied to one face of , we have since on all edges by the definition of .the definition of also gives since equations , and together imply since we thus find that on this implies that the tangential component of on , namely , has vanishing surface curl , so it must equal a surface gradient , i.e. , for some .moreover , since vanishes on all edges , may be chosen to be of the form for some , where is the product of all barycentric coordinates of that do not vanish a.e .on . to use the remaining ( as yet unused ) condition in the definition of , note that the tangential component of the coordinate vector , namely is in . combining this with , we find that for all and any , where the sums run over all faces of . for any , the function is in the raviart - thomas space on the closed manifold denoted by .( note that unlike , this space consists of functions with the appropriate compatibility conditions across edges of . )the surface divergence map is surjective .hence the term appearing in spans all of as and are varied . choosing and so that , we conclude that vanishes and hence , i.e. , next , setting in and integrating by parts , we obtain from and , it follows that is in , and furthermore , satisfies and .hence , by lemma [ lem : div ] , vanishes .thus and consequently for some furthermore , by , we may choose for some , where is the product of all barycentric coordinates of .then implies it now follows from the surjectivity of that , and in turn , vanishes on .the next lemma defines the operator . it will be useful to observe now that for any , indeed , while the forward implication is obvious , the converse follows from .this shows that the condition that appears both in the definition of and in above , actually amounts to just one constraint .[ lem : picurl ] given any , there is a unique in satisfying . we need to estimate . 
first , note that since is a one - dimensional space of constant functions on , the tangential component of any is a polynomial of degree at most on each edge , so represents constraints per edge .hence , counting the number of constraints in the definition of , where we have used .thus , next , we count the number of equations in , namely this together with implies that thus , the system , after using a basis , is an matrix system of the form , where is the vector of coefficients in a basis expansion of and is the right - hand side vector made using the given . by lemma [ lem : uniq ] , .hence shows that .the system determining is therefore a square invertible system .[ lem : contains ] for all and , let where is as in .since is constant along edges of , must satisfy along the edges .moreover , due to . hence , integrating by parts on any face of , summing over all faces of and using ,we conclude that therefore , to finish proving that , it only remains to show that for all . butthis is obvious from the fact that is a gradient .next , we need to show that is in .since it is obvious that , it suffices to prove that for all in .note that can be orthogonally decomposed into its subspace and its -orthogonal complement .the latter is a subspace of where holds ( since ) .hence it only remains to prove that holds for in .but stokes theorem shows that actually holds for all cf . .[ lem : commute ] for all and , by lemma [ lem : contains ] , is in . we will now show that satisfies .let and consider by lemma [ lem : picurl ] , satisfies , so the last term above vanishes . integrating the remaining term on the right - hand side by parts , and using , we find that this proves that , i.e. , holds .next , for any we have the last term vanishes due to .moreover , due to , so we have proven that holds as well . hence by lemma [ lem : uniq ] , .this proves . to prove, we proceed similarly and show that is zero . by lemma [ lem :contains ] , we know that , so if we prove that [ eq : pihdivtmp ] then lemma [ lem : div ] would yield . to prove , to prove , this finishes the proof of and hence follows . finally , to prove , let in . for any , integrating by parts , which vanishes by . hence and is proved .the lemmas of this section prove all statements of theorem [ thm : fortin ] except . to prove , we use a scaling argument and the commutativity properties of lemma [lem : commute ] .let denote the unit tetrahedron and let the -fortin operator on , defined as above , be denoted by by the unisolvency result of lemma [ lem : picurl ] , the -boundedness of the sesquilinear forms and ( a consequence of lemma [ lem : duality ] ) , and by finite dimensionality , there is a such that for all .let be the one - to - one affine map that maps onto a general tetrahedron . for , define and .the elementary proofs of the following assertions ( i)(iii ) are left to the reader . 1 . if and only if 2 .there are constants depending only on the shape regularity of ( but not on ) such that 3 . these three statements imply that while this immediately gives the needed estimate for the -part , namely we need to improve the estimate on the curl to finish the proof : for this , we use the commutativity property the required estimate follows from and . before concluding this section , let us illustrate how to use theorem [ thm : fortin ] for error analysis of dpg methods by an example . 
consider the broken variational problem of example [ eg : primal ] : find such that holds with we want to analyze the dpg method given by with we have already shown in example [ eg : primal ] that the inf - sup condition required for application of theorem [ thm : dpg ] holds ( and ) . hence to obtain optimal error estimates from theorem [ thm : dpg ] , it suffices to verify assumption [ asm : pi ] . we claim that assumption [ asm : pi ] holds with . indeed , by applying element by element . note that here we have used the fact that the discrete spaces have been set so that and is a polynomial of degree at most on each face of ( i.e. , it is in ) , allowing us to apply . applying theorem [ thm : dpg ] , we recover the error estimates for this method , originally proved in . in this section , we combine the various tools developed in the previous sections to analyze the dpg method for a model problem in time - harmonic electromagnetic wave propagation . consider a cavity , an open bounded connected and contractible domain in , shielded from its complement by a perfect electric conductor throughout its boundary . if all time variations are harmonic of frequency , then maxwell equations within the cavity reduce to these : [ eq : maxwell ] the functions represent electric field , magnetic field , and imposed current , respectively , and denotes the imaginary unit . for simplicity we assume that the electromagnetic properties and are positive and constant on each element of the tetrahedral mesh . the number denotes a fixed wavenumber . in this section we develop and analyze a dpg method for . eliminating from and , we obtain the following second order ( non - elliptic ) equation where . the standard variational formulation for this problem is obtained by multiplying by a test function , integrating by parts and using the boundary condition : find satisfying for any given . it is well - known that has a unique solution for every whenever is not in the countably infinite set of resonances of the cavity . throughout this section , we assume . this wellposedness result provides an accompanying stability estimate , namely there is a constant such that for any and satisfying . note that the stability constant may blow up as approaches a resonance . we continue to use to denote a generic mesh - independent constant , which in this section may depend on and as well . the primal dpg method for the cavity problem is obtained by breaking . multiply by a ( broken ) test function and integrate by parts , element by element , to get now set to be an independent interface unknown which is to be found in . this leads to the variational problem with the following spaces and forms : [ eq : dpgmaxwellcavity ] this is the primal dpg formulation for the maxwell cavity problem . the numerical method discretizes the above variational problem using subspaces and defined by [ eq : maxwellxyh ] we have the following error bound for the numerical solution in terms of the mesh size and polynomial degree . [ cor : primalmaxwell ] suppose is the dpg solution given by with forms and spaces set by and let be the exact solution of . then , there exists a depending only on , , and the shape regularity of the mesh such that to apply theorem [ thm : dpg ] , we must verify the inf - sup condition for the broken form . as in the previous examples , as a first step , we verify the inf - sup condition for the unbroken form stated in assumption [ asm : a0 ] . given any , let be defined by for all . then , and imply i.e.
, assumption [ asm : a0 ] holds with assumption [ asm : hybrid ] , with is immediately verified by and of theorem [ thm : duality ] .hence theorem [ thm : hybrid ] verifies and also shows that . the only remaining condition to verify before applying theorem [ thm : dpg ] is assumption [ asm : pi ] , which immediately follows by the choice of spaces and theorem [ thm : fortin ] . applying theorem [ thm : dpg ] , we find that . \end{aligned}\ ] ] now , is an extension to of the exact interface solution .moreover , the interface function appearing above can be extended into . since the interface norm is the minimum over all extensions , by standard approximation estimates ( see e.g. , ( * ? ? ? * theorem 8.1 ) ) , \\ & \le c \sum_{k\in { { \varomega_h } } } \bigg [ h_k^{2(s_1 + 1 ) } |e|_{h^{s_1 + 1}(k)}^2 + h_k^{2s_1 } |{\mathop{\mathrm{curl}}}e|_{h^{s_1 + 1}(k)}^2 + \\ & \hspace{2.1 cm } h_k^{2 ( s_2 + 1 ) } |h|_{h^{s_2 + 1}(k)}^2 + h_k^{2s_2 } | { \mathop{\mathrm{curl}}}h |_{h^{s_2 + 1}(k)}^2 \bigg ] , \end{aligned}\ ] ] where and .hence the corollary follows . unlike the standard finite element method , for the dpg method, there is no need for to be `` sufficiently small '' to assert the convergence estimate of corollary [ cor : primalmaxwell ] .this property has been called _ absolute stability _ by some authors and other methods possessing this property are known . in example[ eg : diffus ] , we saw that a single diffusion - convection - reaction equation admits various different formulations .the situation is similar with maxwell equations .first , let us write in operator form using an operator ( analogous to the one in , but now ) defined by as for some given in .however , we will not restrict to right - hand sides of this form as we will need to allow the most general data possible in the ensuing wellposedness studies .we view as an unbounded closed operator on whose domain is it is easy to show that its adjoint ( in the sense of closed operators ) is the closed operator given by whose domain is the following subspace of : classical arguments show that both and are injective . to facilitate comparison , we list all our formulations at once , including the already studied primal form .strong form : : let be a group variable . set that i.e. , considered as a subspace of ( rather than as a subspace of ) .the maxwell problem is to find given such that this fits into our variational framework by setting to primal form for : : this is the same as in , i.e. , with the spaces as set there , with set to and set to the electric primal formulation is and its broken version is .primal form for : : eliminating from , we obtain and a ( possibly nonhomogeneous ) boundary condition on . with this in place of as the starting point and repeating the derivation that led to , we obtain the following magnetic primal form . set with set to and set to , the magnetic primal formulation is and its broken version is .ultraweak form : : this form is obtained by integrating by parts all equations of the strong form . 
using group variables , and , set + [ eq : maxuw ] + and consider formulations and with and . note that in the definition of , the operator is applied element by element , per our tacit conventions when using the -notation . dual mixed form : : among the two equations in the strong form , if one weakly imposes ( by integrating by parts ) the first equation and strongly imposes the second , then we get the following dual mixed form . set and consider with , its broken version is with set to . mixed form : : reversing the roles above and weakly imposing the second equation while strongly imposing the first , we get another mixed formulation . set and consider with , its broken version is with set to . these form a total of six unbroken and five broken formulations , counting the already discussed broken and unbroken electric primal formulation . to analyze the remaining formulations , let us begin by verifying assumption [ asm : a0 ] for all the unbroken formulations . to this end , label the statement of assumption [ asm : a0 ] with set to the above - defined as `` '' for all . then ( analogous to the equivalences in figure [ fig : impl ] for the elliptic example ) we now have equivalence of statements as proved next . [ thm : maxwellcycles ] the following implications hold : ( a diagram appears here , showing the cycle of implications among the wellposedness statements of the six formulations . ) we begin with the most substantial of all the implications , which allows us to go from the strongest to the weakest formulation . when there can be no confusion , let us abbreviate cartesian products of as simply and write for . clearly , is complete in the -norm . it is easy to see that the graph norms and are both equivalent to the -norm , so is a hilbert space in any of these norms . these norm equivalences show that the inf - sup condition holds if and only if the bound implied by shows that the range of is closed . by the closed range theorem for closed operators , the range of is closed . since is also injective , it follows that holds with the same constant as in . this in turn implies that the following inf - sup condition holds : thus , to complete the proof of , it suffices to show that for completeness , we now describe the standard argument that shows that one may reverse the order of inf and sup to prove . viewing as a bounded linear operator , we know that it is a bijection because of and . hence is bounded . the right - hand side of equals its operator norm . the left - hand side of equals the operator norm of the dual of ( considered as the dual operator of a continuous linear operator with as the pivot space identified to be the same as its dual space ) . the norms of a continuous linear operator and its dual are equal , so follows . let and be in . clearly , is contained in . because of the extra regularity of , we may integrate by parts the last term in the definition of to get that hence using , thus , to finish the proof of , we only need to control using the last term of .
using to bound the last term , the proof of is finished .for any set we need to prove which is equivalent to introducing a new variable , we find that hence hence immediately follows from .to prove the inf - sup condition , it is enough to prove for all given any the equation is the same as the system we multiply by the conjugate of , for some and integrate by parts , while we multiply by and solely integrate .the result is adding the above two equations together , we get the primal form where hence the given inf - sup condition implies since , this provides the required bound for . since is also bounded , equation yields a bound for . combining these bounds , follows .to conclude the proof of the theorem , we note that the proofs of the implications , , are similar to the proofs of , , and , respectively .theorem [ thm : maxwellcycles ] verifies assumption [ asm : a0 ] for all the formulations because we know from that holds .assumption [ asm : hybrid ] can be easily verified for all the broken formulations using theorem [ thm : duality ] .assumption [ asm : pi ] can be verified using theorem [ thm : fortin ] .hence convergence rate estimates like in corollary [ cor : primalmaxwell ] can be derived for each of the broken formulations .we omit the repetitive details .in this section , we present some numerical studies focusing on the maxwell example .numerical results for other examples , including the diffusion - convection - reaction example , can be found elsewhere .the numerical studies are not aimed at verifying the already proved convergence results , but rather at investigations of the performance of the dpg method beyond the limited range of applicability permitted by the theorems .all numerical examples presented in this section have been obtained with , a 3d finite element code supporting anisotropic and refinements and solution of multi - physics problems involving variables discretized compatibly with the -- exact sequence of spaces .the code has recently been equipped with a complete family of orientation embedded shape functions for elements of many shapes .the remainder of this section is divided into results from two numerical examples .we numerically solve the time - harmonic maxwell equations setting material data to and to the unit cube . to obtain ,the unit cube was partitioned first into five tetrahedra : four similar ones adjacent to the faces of the cube , and a fifth inside of the cube .we have used the refinement strategy of to generate a sequence of successive uniform refinements . on these meshes ,consider the primal dpg method for , described by , with data set so that the exact solution is the following smooth function . 
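all of the dpg computations reported in this section reduce , on each element , to a small amount of dense linear algebra : a gram matrix of the broken test inner product , a rectangular form matrix , the resulting normal equations , and a residual whose dual norm serves as the built - in error estimator . the following sketch is only schematic : the matrices are generic placeholders , not the element routines of the 3d code used for the experiments or the actual maxwell forms .

```python
import numpy as np

def dpg_element_system(B, G, l):
    """Schematic DPG element computation with placeholder data.

    B : (n_test, n_trial) local form matrix  b(u, v)
    G : (n_test, n_test)  Gram matrix of the broken test inner product
    l : (n_test,)         local load vector  l(v)

    Returns the local stiffness A = B^H G^{-1} B, load f = B^H G^{-1} l,
    and a function evaluating the local estimator contribution
    eta^2 = (l - B u)^H G^{-1} (l - B u).
    """
    Ginv_B = np.linalg.solve(G, B)          # columns of the discrete trial-to-test map
    Ginv_l = np.linalg.solve(G, l)
    A = B.conj().T @ Ginv_B                 # local stiffness (normal equations)
    f = B.conj().T @ Ginv_l                 # local load

    def eta_sq(u_local):
        r = l - B @ u_local                 # local residual
        return float(np.real(r.conj() @ np.linalg.solve(G, r)))

    return A, f, eta_sq

# toy usage with random placeholder data (sizes are illustrative only)
rng = np.random.default_rng(0)
n_test, n_trial = 12, 6
B = rng.standard_normal((n_test, n_trial))
M = rng.standard_normal((n_test, n_test))
G = M @ M.T + n_test * np.eye(n_test)       # symmetric positive definite Gram matrix
l = rng.standard_normal(n_test)
A, f, eta_sq = dpg_element_system(B, G, l)
u = np.linalg.solve(A, f)                   # local solve, for illustration only
print("local estimator contribution:", eta_sq(u))
```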
instead of the pair of discrete spaces that we know is guaranteed to work by our theoretical results , we experiment with these discrete spaces : [ eq : xyexpt ] the observed rates of convergence of the error and the residual norm are shown in figure [ fig : smooth_primal_rates_tet ] . the rates are optimal . this suggests that the results of corollary [ cor : primalmaxwell ] may hold with other choices of spaces . results analogous to those in that allow one to reduce the degree of the test space while maintaining optimal convergence rates are currently not known for the maxwell problem . we also present similar results obtained using cubic meshes with -conforming nédélec hexahedral elements of the first type . namely , and are set by after revising to where denotes the set of polynomials of degree at most and in the and directions , respectively . the convergence rates reported in figure [ fig : smooth_primal_rates_hex ] are again optimal . before concluding this example , we also report convergence rates obtained from the ultraweak formulation of . the discrete spaces are now set by [ eq : xyexptuw ] recall that the dpg computations require a specification of the -norm . using the observation ( made in the proof of theorem [ thm : maxwellcycles ] ) that the adjoint graph norm is equivalent to the natural norm in , we set in all computations involving the ultraweak formulation . the results reported in figure [ fig : smooth_uw_rates ] again show optimal convergence rates . note that only the errors in the interior variables and ( in -norm ) are reported in the figure . to compute errors in the interface variables , we must compute approximations to fractional norms carefully ( see for such computations in two dimensions ) . since the code does not yet have this capability in three dimensions , we have not reported the errors in the interface variables . ( figures [ fig : smooth_primal_rates_tet ] , [ fig : smooth_primal_rates_hex ] and [ fig : smooth_uw_rates ] appear here . ) to illustrate the adaptive possibilities of the dpg method and the difference between different variational formulations , we now present results from a `` fichera oven '' problem . we start with the standard domain with a fichera corner , obtained by refining a cube into eight congruent cubes and removing one of them . we then attach an infinite waveguide to the top of the oven and truncate it at a unit distance from the fichera corner , as shown in figure [ fig : fichera_microwave ] . setting , we drive the problem with the first propagating waveguide mode , which is used as a non - homogeneous electric boundary condition
across the waveguide section . analogous to a microwave oven model , we set the homogeneous perfect electric boundary condition everywhere else on the boundary . the above material data correspond to about 0.8 wavelengths per unit domain . in all the reported computations , we start with a uniform mesh of eight quadratic elements that clearly does not even meet the nyquist criterion . we expect the solution to develop strong singularities at the reentrant corner and edges , but we do not know the exact solution . ( figures [ fig : microwave_primal_residual ] and [ fig : microwave_ultraweak_residual ] appear here , showing the computed residual versus the number of degrees of freedom for the adaptive iterations of the primal and ultraweak formulations . ) first , we report the results from the electric primal formulation , choosing spaces again as in . figure [ fig : microwave_primal ] presents the evolution of the mesh along with the corresponding real part of the first component of electric field . since we do not have the exact solution for this problem , we display convergence history using a plot of the evolution of the computed residual in figure [ fig : microwave_primal_residual ] . ( recall that theoretical guidance on the similarity of behaviors of error estimator and the error is provided by theorem [ thm : dpg ] . ) clearly , the figure shows the residual is being driven to zero during the adaptive iteration . next , we solve the same problem using the ultraweak formulation with the spaces set as in and the -norm set to the adjoint graph norm as in the previous example . the convergence history of the residual norm is displayed in figure [ fig : microwave_ultraweak_residual ] . the evolution of the mesh along with the real part of is illustrated in figure [ fig : microwave_ultraweak ] . ( figures [ fig : microwave_primal ] , [ fig : microwave_ultraweak ] and [ fig : comparison ] appear here ; the caption of figure [ fig : comparison ] reads : computed by the primal and the ultraweak formulations ; a middle slice ( through the singular vertex ) of adaptive iterates 6 , 7 , 8 and 9 is shown . )
it is illustrative to visualize the difference between the two different dpg formulations and the accompanying convergence in different norms . figure [ fig : comparison ] presents a side - by - side comparison of the real part of the electric field component obtained using the primal ( left ) and ultraweak ( right ) formulations . the same color scale ( min , max ) is applied to both solutions in this figure ( whereas the scales of figures [ fig : microwave_primal ] and [ fig : microwave_ultraweak ] are not identical ) . obviously , the meshes are different , but they are of comparable size , so we believe the comparison is fair . the primal method , which delivers a solution converging in the stronger -norm , `` grows '' the unknown solution more slowly , whereas the ultraweak formulation , converging in the weaker -norm , seems to capture the same solution features faster . both methods ultimately approximate the same solution but at different speeds , and the ultraweak formulation seems to be the winner . recall that the number of interface unknowns for both formulations is identical , but the total number of unknowns for the ultraweak formulation is higher , i.e. , the ultraweak formulation requires a larger number of local ( element - by - element ) computations . in addition to presenting the first analysis of dpg methods for maxwell equations , we have presented a technique that considerably simplifies the analysis of various dpg methods . the idea is to inherit the stability of broken formulations from the known stability of unbroken standard formulations , and is the content of theorem [ thm : hybrid ] . to obtain discrete stability for the maxwell discretization , a new fortin operator was constructed in theorem [ thm : fortin ] . we have shown how certain duality identities ( proved in theorem [ thm : duality ] ) can be used to verify a critical assumption ( assumption [ asm : hybrid ] ) involving an interface inf - sup condition . during this process , we have provided a simple technique to prove duality identities ( see and lemma [ lem : duality ] ) like ( where the norm on the right - hand side is the norm in the space dual to ) . finally , the connection between the stability of weak and strong formulations was made precise in theorem [ thm : maxwellcycles ] : the wellposedness of any one of the displayed six formulations implies the wellposedness of all the others . notwithstanding this result , the numerical experiments clearly showed the practical differences in convergence among the formulations .
before concluding , we mention a few limitations of our analysis and open issues . convergence results explicit in the polynomial degree are not obtained by the currently known fortin operators . while construction of local fortin operators provides one way to prove discrete stability , other avenues to reach the same goal ( such as the analysis of assuming higher regularity , or the analysis of extending the strang lemma ) may prove important . our analysis did not track the dependence on the wavenumber . more complex techniques are likely to be needed for such parameter tracking , including in the unrelated important examples of advective singular perturbation problems .
discontinuous petrov galerkin ( dpg ) methods are made easily implementable using `` broken '' test spaces , i.e. , spaces of functions with no continuity constraints across mesh element interfaces . broken spaces derivable from a standard exact sequence of first order ( unbroken ) sobolev spaces are of particular interest . a characterization of interface spaces that connect the broken spaces to their unbroken counterparts is provided . stability of certain formulations using the broken spaces can be derived from the stability of analogues that use unbroken spaces . this technique is used to provide a complete error analysis of dpg methods for maxwell equations with perfect electric boundary conditions . the technique also permits considerable simplifications of previous analyses of dpg methods for other equations . reliability and efficiency estimates for an error indicator also follow . finally , the equivalence of stability for various formulations of the same maxwell problem is proved , including the strong form , the ultraweak form , and various forms in between .
the dramatic increase of mobile data traffic in the recent years has posed imminent challenges to the current cellular systems , requiring higher throughput , larger coverage , and smaller communication delay .the 5 g cellular system on the roadmap is expected to achieve up to 1000 times of throughput improvement over today s 4 g standard . as a promising candidate for the future 5 g standard, cloud radio access network ( c - ran ) enables a centralized processing architecture , using multiple relay - like base stations ( bss ) , named remote radio heads ( rrhs ) , to serve mobile users cooperatively under the coordination of a central unit ( cu ) . for the practical deployment of c - ran ,a cluster - based c - ran system is shown in fig .[ fig1 ] , where the same frequency bands could be reused over non - adjacent or even adjacent c - ran clusters to increase spectral efficiency through coordination among cus by applying certain interference management techniques such as dynamic resource allocation . within each c- ran cluster , the rrhs are connected to a cu that is further connected to the core network via high - speed fiber fronthaul and backhaul links , respectively . in a c - ran , a mobile user could be associated with multiple rrhs . however , unlike the bss in conventional cellular systems which encode / decode user messages locally , the rrhs merely forward the signals to / from the mobile users , while leaving the joint encoding / decoding complexity to a baseband unit ( bbu ) in the cu .the use of inexpensive and densely deployed rrhs , along with the advanced joint processing mechanism , could significantly improve upon the current 4 g system with enhanced scalability , increased throughput and extended coverage .the distributed antenna system formed by the rrhs enables spectrum efficient spatial division multiple access ( sdma ) in c - ran , which has gained extensive research attentions . in the uplink communication of an sdmabased c - ran , all mobile users in the same cluster transmit on the same spectrum and at the same time , while the bbu performs multi - user detection ( mud ) to separate the user messages . in practice , however , the implementation of mud is hurdled by the high computational complexity and the difficulty in signal synchronization as well as perfect channel estimation .similarly , the downlink communication using sdma is also of high complexity in the encoding design to mitigate the co - channel interference . withthis regard , orthogonal frequency division multiple access ( ofdma ) is an alternative candidate for c - ran because of its efficient spectral usage and yet low encoding / decoding complexity . in ofdma - based c - ran systems , users are allocated with orthogonal subcarriers ( scs ) free of co - channel interference . in this case ,simple maximal - ratio combining ( mrc ) technique could be performed at the cu over the signals received from different rrhs to decode a user s message transmitted on its designated sc .moreover , ofdma is compatible with the current wireless communication systems such as 4 g lte .considering its potential implementations in future wireless systems and compatibility with the current 4 g standards , we consider ofdma for the cluster - based c - ran ( see fig . [ fig1 ] ) in this paper .the performance of a c - ran system is constrained by the fronthaul link capacity . 
with densely deployed rrhs ,the fronthaul traffic generated from a single user signal of mhz bandwidth could be easily scaled up to multiple gbps . in practice , a commercial fiber link with tens of gbps capacity could thus be easily overwhelmed even under moderate mobile traffic . to tackle this problem , many signal compression / quantization methodshave been proposed to optimize the system performance under fronthaul capacity constraints .specifically , the so - called `` quantize - and - forward '' scheme is widely adopted for the uplink communication in c - ran to reduce the communication rates between the bbu and rrhs , where each rrh samples , quantizes and forwards its received signals to the bbu over its fronthaul link .the quantize - and - forward scheme is initially studied in relay channel as an efficient way for the relay to deliver the received signal from the source to the destination . in the uplink communication of c - ran , which can be viewed as a special case of relay channel model with a wireless first - hop link and wired ( fiber ) second - hop link ,quantize - and - forward scheme is studied under an information - theoretical gaussian test channel model with the uncompressed signals as the input and compressed signals as the output corrupted by an additive gaussian compression noise .then , the quantization methods are designed through setting the quantization noise levels at different rrhs to maximize the end - to - end throughput subject to the capacity constraints of individual fronthaul links .specifically , the optimal quantization design needs to consider the signal correlation across the multiple rrhs , where methods based on distributed source coding , e.g. , wyner - ziv coding , are widely used to jointly optimize the noise levels at the rrhs ( see e.g. , ) .besides , quantization method based on distributed source coding is also studied in the downlink communication of c - ran in . despite of their respective contributions to the understanding of the theoretical limits of c - ran , most of the proposed quantization methods are based on information - theoretical models , e.g. , gaussian test channel and distributed source coding , which are practically hard to implement . on one hand , although the quantization noise levels across different rrhs that maximize the end - to - end throughput are found in under different system setups , it is still unknown how to practically design quantization codebook at each rrh to achieve the required quantization noise level for the gaussian test channel model . on the other hand ,the decompression complexity of distributed source coding grows exponentially with the number of sources ( e.g. , rrhs in the uplink communication ) . in practice, the complexity can be prohibitively high in a c - ran with a large number of cooperating rrhs . therefore, it still remains as a question about the practically achievable throughput of c - ran using practical quantization methods , such as uniform scalar or vector quantization used in common a / d modules , which are independently applied over rrhs .furthermore , most of the existing works ( e.g. , ) only study signal compression methods in c - ran under fixed wireless resource allocation .however , the end - to - end performance of c - ran is determined by both the wireless and fronthaul links . 
in an ofdma system ,transmit power allocation over frequency scs directly determines the spectral efficiency of wireless link .for an ofdma - based system without fronthaul constraint , the optimal power allocation problem is extensively studied , e.g. , it follows the celebrated water - filling policy for a single user case .however , the behavior of optimal sc power allocation in a fronthaul constrained system like c - ran is still unknown to the authors best knowledge . in this paper, we address the above problems in an ofdma - based c - ran .in particular , we consider using simple uniform scalar quantization instead of the information - theoretical quantization method based on gaussian test channel , and propose joint wireless power control and fronthaul rate allocation design to maximize the system throughput performance .our main contributions are summarized as follows : * in the uplink communication of an ofdma - based c - ran , we derive the end - to - end sum - rate of all the users subject to each rrh s fronthaul capacity constraint achieved by a simple uniform scalar quantization at each rrh together with independent compression among rrhs .different from prior works based on gaussian test channel model , this provides for the first time an achievable rate result for c - ran with a practically implementable quantization method . * with the derived rate under uniform scalar quantization , we formulate the optimization problem of joint wireless power control and fronthaul rate allocation to maximize the sum - rate performance in ofdma based c - ran .we also formulate the problem based on the gaussian test channel model to obtain performance benchmark .efficient algorithms are proposed to solve the formulated joint optimization problems based on the alternating optimization technique .* by investigating the single - user and single - rrh special case , we obtain important insights on the optimal wireless power control and fronthaul rate allocation over scs .for example , with a fixed fronthaul rate allocation , we show that the optimal power allocation over scs is a threshold based policy depending on the channel power of a sc , i.e. , no power is allocated to a sc if the channel power is below the threshold .interestingly , we find that the power allocation under fronthaul rate constraint in general does not follow a water - filling policy that always allocates more power to sc with higher channel power .the inconsistency is especially evident in low - fronthaul - rate region , where the sc with the highest channel power may receive the least transmit power , and vice versa .we also theoretically quantify the performance gap between the proposed simple uniform quantization scheme from the throughput upper ( cut - set ) bound . by simulationswe show that the throughput performance of the simple uniform quantization scheme is very close to the performance upper bound , and in fact overlaps with the upper bound when the fronthaul capacity is sufficiently large .the rest of this paper is organized as follows .we first introduce in sections [ sec : system model ] and [ sec : two scalar quantization models ] the system model of c - ran and the quantization techniques used in the fronthaul signal processing , respectively . 
in section [ sec : problem formulation ] , we formulate the end - to - end sum - rate maximization problems for both the gaussian test channel and uniform scalar quantization models .sections [ sec : special case : single user and single rrh ] and [ sec : general case : multiple users and multiple rrhs ] solve the formulated problems for the special case of single - user and single - rrh and general case of multi - user and multi - rrh , respectively . finally , we conclude the paper and point out some directions for future work in section [ sec : conclusion ] .we consider the uplink of a clustered c - ran . as shown in fig . [ fig1 ] , each cluster consists of one bbu , single - antenna rrhs , denoted by the set , and single - antenna users , denoted by the set .it is assumed that each rrh , , is connected to the bbu through a noiseless wired fronthaul link of capacity bps . in the uplink ,each rrh receives user signals over the wireless link and forwards to the bbu via its fronthaul link .then , the bbu jointly decodes the users messages based on the signals from all the rrhs within the cluster and forwards the decoded information to the core network through a backhaul link .the detailed signal models in the wireless and the fronthaul links are introduced in the following . in this paper, we consider ofdma - based uplink information transmission between the users and the rrhs over a wireless link of a total bandwidth equally divided into scs .the sc set is denoted by .it is assumed that each sc is only allocated to one user .denote as the set of scs allocated to user , . in practice, dynamic sc allocation could be used to enhance the spectral efficiency by assigning scs to users of favorable wireless link conditions , e.g. , allocating a sc to the user with the highest signal - to - interference - plus - noise ratio ( sinr ) . however , as an initial attempt to understand the joint design of the wireless resource allocation and fronthaul rate allocation in fronthaul constrained c - ran , it is assumed for simplicity in this paper that the sc allocations among users , i.e. , s , are pre - determined .the interesting case with dynamic sc allocation is left for future study .specifically , in the uplink each user , , first generates an ofdma modulated signal over its assigned scs and then transmits to the rrhs in the same cluster . as shown in fig .[ fig2 ] , each rrh , , first downconverts the received rf signals to the baseband , then transforms the serial baseband signals to the parallel ones , and demodulates the parallel signals into streams by performing fast fourier transform ( fft ) .suppose that , then the equivalent baseband complex symbol received by rrh at sc can be expressed as denotes the transmit symbol of user at sc ( which is modelled as a circularly symmetric complex gaussian random variable with zero - mean and unit - variance ) , denotes the transmit power of user at sc , denotes the channel from user to rrh at sc , and denotes the aggregation of additive white gaussian noise ( awgn ) and ( possible ) out - of - cluster interference at rrh at sc .it is assumed that s are independent over and . 
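for concreteness , the per - sc baseband model above can be simulated in a few lines . the sketch below draws unit - variance circularly symmetric gaussian transmit symbols , applies per - sc transmit powers and channel gains , and adds independent noise at each rrh ; the numbers of rrhs and scs , the powers and the noise level are illustrative assumptions , not values taken from the paper .

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 3, 8                 # illustrative: M RRHs, N subcarriers (one user per SC)
sigma2 = 1e-2               # noise-plus-interference power per SC (assumption)
p = np.full(N, 0.5)         # transmit power per SC (assumption)

# circularly symmetric complex Gaussian symbols and channels
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
z = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# received baseband symbol at RRH m on SC n:  y[m, n] = sqrt(p_n) * h[m, n] * x[n] + z[m, n]
y = np.sqrt(p)[None, :] * h * x[None, :] + z
print(y.shape)   # (M, N)
```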
to forward the baseband symbols to the bbu via the fronthaul links , the so - called `` quantize - and - forward '' scheme is applied , where each rrh first quantizes its baseband received signal and then sends the corresponding digital codewords to the bbu .specifically , since at each rrh the received symbols at all the scs are independent with each other and we assume independent signal quantization at different rrhs , a simple scalar quantization on s is optimal as shown in fig .. the baseband quantized symbol of is then given by denotes the quantization error for the received symbol with zero mean and variance .note that s are independent over due to scalar quantization at each sc , and over due to independent compression among rrhs .then , each rrh transforms the parallel encoded bits s into the serial ones and sends them to the bbu via its fronthaul link for joint information decoding .after collecting the digital codewords , the bbu first recovers the baseband quantized symbols s based on the quantization codebooks used by each rrh .then , to decode , the bbu applies a linear combining on the quantized symbols at sc collected from all rrhs : ^t ] , ^t ] . according to ( [ eqn : beamforming ] ), the snr for decoding is expressed as denotes a diagonal matrix with the main diagonal given by vector .it can be shown that the optimal combining weights that maximize s are obtained from the well - known mrc : the above mrc receiver , given in ( [ eqn : sinr ] ) reduces to key issue to implement the quantize - and - forward scheme introduced in section [ sec : system model ] is how each rrh should quantize its received signal at each sc in practice . in this section ,we first study a theoretical quantization model by viewing ( [ eqn : quantized signal ] ) as a test channel and derive its achievable sum - rate based on the rate - distortion theory , which can serve as a performance upper bound .then , we investigate the practical uniform scalar quantization scheme in details , which can be easily applied at each rrh , and derive the corresponding achievable end - to - end sum - rate . in this subsection, we assume that the quantization errors given in ( [ eqn : quantized signal ] ) are gaussian distributed , i.e. , , . with gaussian quantization errors , ( [ eqn : quantized signal ] )can be viewed as a gaussian test channel . as a result , to forward the received data at sc , the transmission rate in rrh s fronthaul linkis expressed as quantization is performed at each rrh independently , can be reliably transmitted to the bbu if and only if next , consider the end - to - end performance of the users . with gaussian noise in ( [ eqn : quantized signal ] ) ,the achievable rate of user at sc is expressed as is obtained by substituting by according to ( [ eqn : fronthaul link sc ] ) . notice that as the allocated fronthaul rate ( versus ) , the achievable end - to - end rate in ( [ eqn : test channel user rate ] ) converges to zero ( or that of the wireless link capacity ) .then , the achievable throughput of all users is expressed as ( [ eqn : test channel rate ] ) , it is clearly seen that the sum - rate performance depends on both the users power allocations , , and the rrhs fronthaul rate allocations , , over the scs . in practice , it is very difficult to find the quantization codebooks to achieve the throughput given in ( [ eqn : test channel rate ] ) subject to the fronthaul capacity constraints given in ( [ eqn : fronthaul link ] ) . 
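to make the gaussian test channel model concrete , the following sketch computes , for each sc , a quantization noise level consistent with a given per - sc fronthaul rate , performs mrc over the rrhs , and evaluates the per - sc and sum rates . the specific relation used here , namely that the per - sc fronthaul rate equals log2 ( 1 + ( p g + sigma^2 ) / q ) so that q = ( p g + sigma^2 ) / ( 2^d - 1 ) , is one standard instantiation of the test channel and should be read as an assumption standing in for the displayed equations ; all numerical values are illustrative .

```python
import numpy as np

def test_channel_rates(p, g, sigma2, D):
    """Per-SC end-to-end rates under a Gaussian test channel model (a sketch).

    p      : (N,)   transmit power per SC
    g      : (M, N) channel power gains |h|^2 from the user to each RRH
    sigma2 : float  noise power per SC at each RRH
    D      : (M, N) fronthaul rate (bits per received symbol) per RRH and SC

    Assumption: the quantization noise level q satisfies
        D = log2(1 + (p*g + sigma2) / q)   =>   q = (p*g + sigma2) / (2**D - 1),
    and the BBU applies MRC over the quantized signals of all RRHs.
    """
    q = (p[None, :] * g + sigma2) / (2.0 ** D - 1.0)       # per-RRH, per-SC quantization noise
    snr = np.sum(p[None, :] * g / (sigma2 + q), axis=0)    # MRC output SNR per SC
    rates = np.log2(1.0 + snr)                             # bits / symbol per SC
    return rates, rates.sum()

# illustrative numbers only
rng = np.random.default_rng(2)
M, N = 3, 8
p = np.full(N, 0.5)
g = rng.exponential(scale=1.0, size=(M, N))
D = np.full((M, N), 4.0)                                   # 4 bits per symbol per RRH per SC
per_sc, total = test_channel_rates(p, g, 1e-2, D)
print(per_sc.round(2), round(float(total), 2))
```

as a quick sanity check of the limiting behavior noted above , letting d grow drives q to zero and the rates toward the wireless link capacity , while d near zero drives the rates to zero .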
in this subsection, we consider using practical uniform scalar quantization technique at each rrh and derive the achievable sum - rate .a typical method to implement the uniform quantization is via separate in - phase / quadrature ( i / q ) quantization , where the architecture is shown in fig .specifically , the received complex symbol given in ( [ eqn : received signal ] ) could be presented by its i and q parts : , and the i - branch symbol and q - branch symbol are both real gaussian random variables with zero mean and variance . as a result ,each rrh first normalizes its i - branch and q - branch symbols at sc to and by factors and , and then implements uniform scalar quantization to and with quantization bits , separately .for conciseness , we summarize the implementation details of the uniform scalar quantization in appendix [ appendix1 ] . in the following ,we present the end - to - end achievable throughput of all users subject to the fronthaul capacity constraints under the uniform scalar quantization technique described in appendix [ appendix1 ] .[ fronthaul ] with the uniform scalar quantization scheme , the transmission rate from rrh to the bbu in its fronthaul link is given as denotes the transmission rate in rrh s fronthaul link to forward its received data at sc , i.e. , please refer to appendix [ appendix2 ]. [ rate ] with the uniform scalar quantization scheme , an achievable end - to - end throughput of all users is expressed as the achievable rate of user at sc is expressed as please refer to appendix [ appendix3 ] .notice that ( [ eqn : new end to end rate ] ) holds when ( i.e. , ) according to ( [ eqn : fronthaul link uniform quantization sc ] ) .similar to ( [ eqn : test channel rate ] ) for the ideal case of gaussian compression , the sum - rate in ( [ eqn : uniform quantization sum - rate ] ) with the uniform scalar quantization also jointly depends on both the users power allocations , , and the rrhs fronthaul rate allocations , , over the scs .furthermore , given the same set of power and fronthaul rate allocations , the achievable rate in ( [ eqn : uniform quantization sum - rate ] ) is always strictly less than that in ( [ eqn : test channel rate ] ) provided that , .in this paper , given the wireless bandwidth , each user s sc allocation s as well as transmit power constraint s , and each rrh s fronthaul link capacity s , we aim to maximize the end - to - end throughput of all the users subject to each rrh s fronthaul link capacity constraint by jointly optimizing the wireless power control and fronthaul rate allocation .specifically , for the benchmark scheme , i.e. , the theoretical gaussian test channel based scheme in section [ sec : gaussian test channel model ] , we are interested in solving the following problem . is given in ( [ eqn : test channel rate ] ) and is given in ( [ eqn : fronthaul link sc ] ) .furthermore , for the proposed uniform scalar quantization based scheme in section [ sec : uniform quantization model ] , we are interested in solving the following problem . is given in ( [ eqn : uniform quantization sum - rate ] ) and is given in ( [ eqn : fronthaul link uniform quantization sc ] ) .recall that with the same rate allocations in the fronthaul links for the two schemes , i.e. , , , in ( [ eqn : test channel rate ] ) is always larger than given in ( [ eqn : uniform quantization sum - rate ] ) . furthermore , uniform scalar quantization requires that the fronthaul rate allocated at each sc must be an integer multiplication of . 
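before comparing problems ( p1 ) and ( p2 ) , it may help to see the per - sc uniform i / q quantizer itself . the sketch below is a generic mid - rise uniform quantizer with normalization by the branch standard deviation and a clipping range of a few standard deviations ; these details , as well as all numerical values , are assumptions standing in for the construction deferred to the appendix , and the only point carried over is that a complex sample costs twice the per - branch bit budget on the fronthaul .

```python
import numpy as np

def uniform_iq_quantize(y, sigma_branch, bits, clip=3.0):
    """Uniform scalar quantization of the I and Q branches of complex samples.

    y            : complex received samples on one SC
    sigma_branch : std of the real (and imaginary) part of y, used for normalization
    bits         : quantization bits per branch (so 2*bits bits per complex symbol)
    clip         : normalized clipping range in standard deviations (an assumption)

    A generic mid-rise uniform quantizer, not the exact construction of the appendix.
    """
    levels = 2 ** bits
    step = 2.0 * clip / levels

    def q_branch(v):
        v_n = np.clip(v / sigma_branch, -clip, clip - 1e-12)   # normalize and clip
        idx = np.floor((v_n + clip) / step)                    # cell index in [0, levels-1]
        return (idx + 0.5) * step - clip                       # mid-point reconstruction

    return sigma_branch * (q_branch(y.real) + 1j * q_branch(y.imag))

# illustrative usage: the error variance shrinks roughly by 4x per extra bit
rng = np.random.default_rng(3)
y = (rng.standard_normal(10000) + 1j * rng.standard_normal(10000)) / np.sqrt(2)
for bits in (2, 4, 6):
    y_hat = uniform_iq_quantize(y, 1.0 / np.sqrt(2), bits)
    print(bits, round(float(np.mean(np.abs(y - y_hat) ** 2)), 6))
```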
due to the above two reasons , in general the optimal value of problem ( p2 ) is smaller than that of problem ( p1 ) , i.e. , .it is also worth noting that user association is also determined from solving problems ( p1 ) and ( p2 ) , since if with the obtained solution we have , , rrh will not quantize and forward user s signal to the bbu for decoding , or equivalently rrh does not serve that user at all .it can be also observed that both problems ( p1 ) and ( p2 ) are non - convex since their objective functions are not concave over s and s ; thus , it is difficult to obtain their optimal solutions in general . in the following two sections , we first study the special case of problems ( p1 ) and ( p2 ) with one user and one rrh to shed some light on the mutual influence between the wireless power allocation and fronthaul rate allocation , and then propose efficient algorithms to solve problems ( p1 ) and ( p2 ) for the general case of multiple users and multiple rrhs .in this section , we study problems ( p1 ) and ( p2 ) for the special case of and . for convenience , in the rest of this section we omit the subscripts of and in all the notations in problems ( p1 ) and ( p2 ) .it can be shown that problem ( p1 ) is still a non - convex problem for the case of and . in this subsection , we propose to apply the alternating optimization technique to solve this problem .specifically , first we fix the fronthaul rate allocation s in problem ( p1 ) and optimize the wireless power allocation by solving the following problem . denote the optimal solution to problem ( [ eqn : p2 ] ) .next , we fix the wireless power allocation s in problem ( p1 ) and optimize the fronthaul rate allocation by solving the following problem . denote the optimal solution to problem ( [ eqn : p3 ] ) .the above update of and is iterated until convergence . in the following ,we show how to solve problems ( [ eqn : p2 ] ) and ( [ eqn : p3 ] ) , respectively . first , it can be shown that the objective function of problem ( [ eqn : p2 ] ) is concave over s . as a result ,problem ( [ eqn : p2 ] ) is a convex problem , and thus can be efficiently solved by the lagrangian duality method .we then have the following proposition .[ proposition1 ] the optimal solution to problem ( [ eqn : p2 ] ) is expressed as is a constant under which . please refer to appendix [ appendix4 ] .it can be shown that as s go to infinity , i.e. , the case without fronthaul link constraint in problem ( p1 ) , the optimal power allocation given in ( [ eqn : opt1 ] ) reduces to is consistent with the conventional water - filling based power allocation . in the following ,we discuss about the impact of fronthaul rate allocation on the optimal power allocation given in ( [ eqn : opt1 ] ) with finite values of s .it can be observed from ( [ eqn : opt1 ] ) that the optimal wireless power allocation with given s is threshold - based . in the following ,we give a numerical example to investigate the monotonicity of the threshold over , ( note that in ( [ eqn : fn ] ) is also a function of s ) . in this example , the bandwidth of the wireless link is assumed to be , which is equally divided into scs .the channel powers are given as , , , .moreover , the power spectral density of the background noise is assumed to be / hz , and the noise figure due to receiver processing is .the transmit power of the user is .it is further assumed that the fronthaul rates are equally allocated among scs , i.e. , , , and thus s are of the same value . 
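a small experiment of the kind just described can be set up as follows . the sketch fixes an equal per - sc fronthaul rate , models the end - to - end per - sc rate with the same test - channel relation assumed earlier ( again an assumption , since the exact expressions are in the displayed equations ) , and maximizes the sum rate over the power allocation with a generic numerical optimizer instead of the closed - form threshold solution of proposition [ proposition1 ] ; the channel powers , noise level and power budget are illustrative , not the values used for fig . [ fig10 ] .

```python
import numpy as np
from scipy.optimize import minimize

# illustrative single-user, single-RRH setup (not the paper's values)
g = np.array([2.0, 1.0, 0.5, 0.1])      # channel power gains of N = 4 SCs
sigma2 = 1e-2                            # noise power per SC
P_total = 1.0                            # user power budget
D = 3.0                                  # equal fronthaul bits per SC (fixed)

def sum_rate(p):
    # assumed test-channel form: q = (p*g + sigma2)/(2**D - 1), rate = log2(1 + p*g/(sigma2 + q))
    q = (p * g + sigma2) / (2.0 ** D - 1.0)
    return np.sum(np.log2(1.0 + p * g / (sigma2 + q)))

res = minimize(lambda p: -sum_rate(p),
               x0=np.full(g.size, P_total / g.size),
               bounds=[(0.0, None)] * g.size,
               constraints=[{"type": "ineq", "fun": lambda p: P_total - p.sum()}],
               method="SLSQP")
p_opt = res.x
print("power allocation:", p_opt.round(4), "sum rate:", round(float(sum_rate(p_opt)), 4))
```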
fig .[ fig10 ] ( a ) shows the plot of versus by increasing the value of in problem ( [ eqn : p2 ] ) .it is observed in this particular setup ( and many others used in our simulations for which the results are not shown here due to the space limitation ) that in general is increasing with .this implies that as increases , more scs with weaker channel powers tend to be shut down .the reason is as follows .the dynamic range of the received signal at the sc with stronger channel power is larger , and thus with equal s , the corresponding quantization noise level is also larger .when s are small , quantization noise dominates the end - to - end rate performance and thus the relatively small quantization noise level at the sc with weaker channel power may offset the loss due to the poor channel condition .however , as increases , the quantization noise becomes smaller , until the wireless link dominates the end - to - end performance . in this case, we should shut down some scs with poor channel conditions just as water - filling based power allocation given in ( [ eqn : water filling ] ) . to verify the above analysis , fig .[ fig10 ] ( b ) shows the optimal power allocation among the scs versus different values of in the above numerical example .it is observed that when is small , in general the scs with poorer channel conditions are allocated higher transmit power since the quantization noise levels are small at these scs . as increases , the scs with poorer channels are allocated less and less transmit power .specially , when or , sc 4 with the poorest channel condition is shut down for transmission . it is also observed that when is sufficiently large such that the quantization noise is negligible , the power allocation converges to the water - filling based solution given in ( [ eqn : water filling ] ) .next , similar to problem ( [ eqn : p2 ] ) , it can be shown that problem ( [ eqn : p3 ] ) is a convex problem and thus can be efficiently solved by the lagrangian duality method .we then have the following proposition .[ proposition2 ] the optimal solution to problem ( [ eqn : p3 ] ) can be expressed as is a constant under which . please refer to appendix [ appendix5 ] .similar to the optimal power allocation given in ( [ eqn : opt1 ] ) , it can be inferred from proposition [ proposition2 ] that the optimal fronthaul rate allocation with given s is also threshold - based .if the received signal snr , , at sc is below the threshold , the rrh should not quantize and forward the signal at this sc to the bbu for decoding . on the other hand ,if , more quantization bits should be allocated to the scs with higher values of s . after problems ( [ eqn : p2 ] ) and ( [ eqn : p3 ] ) are solved by propositions [ proposition1 ] and [ proposition2 ] , we are ready to propose the overall algorithm to solve problem ( p1 ) , which is summarized in table [ table1 ] .it can be shown that a monotonic convergence can be guaranteed for algorithm [ table1 ] since the objective value of problem ( p1 ) is increased after each iteration and it is practically bounded . ' '' '' * initialize : set , , , and ; * repeat * * ; * * update by solving problem ( [ eqn : p2 ] ) with , , according to proposition [ proposition1 ] ; * * update by solving problem ( [ eqn : p3 ] ) with , , according to proposition [ proposition2 ] ; * until , where denotes the objective value of problem ( p1 ) achieved by and , and is a small value to control the accuracy of the algorithm . 
' '' '' [ table1 ] with the proposed algorithm i to solve ( p1 ) , we provide a numerical example to analyze the properties of the resulting wireless power and fronthaul rate allocation among scs .the setup of this example is the same as that for fig .[ fig10 ] , while the fronthaul link capacity is assumed to be .[ fig12 ] ( a ) and fig .[ fig12 ] ( b ) show the wireless power allocation and the fronthaul rate allocation at each sc , respectively , obtained via algorithm [ table1 ] . for comparison , in fig .[ fig12 ] ( a ) we also provide the power allocation at each sc obtained by solving problem ( [ eqn : p2 ] ) with equal fronthaul rate allocation , as well as the water - filling based power allocation at each sc ( obtained without considering fronthaul link constraint ) , and in fig .[ fig12 ] ( b ) the equal fronthaul rate allocation as well as the fronthaul rate allocation obtained by solving problem ( [ eqn : p3 ] ) with water - filling based power allocation .it is observed in fig .[ fig12 ] ( a ) that algorithm [ table1 ] results in a more greedy power allocation solution among scs than the water - filling based method : besides sc , sc with the second poorest channel condition is also forced to shut down , and the saved power and quantization bits are allocated to scs and with better channel conditions .this is in sharp contrast to the case of equal fronthaul rate allocation for which sc is allocated the highest transmit power and even sc with the poorest channel condition is still used for transmission .moreover , in fig .[ fig12 ] ( b ) , the fronthaul rate allocations at scs obtained by algorithm [ table1 ] are , , , and , respectively . as a result , different from equal fronthaul rate allocation ,algorithm [ table1 ] tends to allocate more quantization bits to the scs with strong channel power to explore their good channel conditions , while allocating less ( or even no ) quantization bits to the scs with weaker power .a similar fronthaul rate allocation is observed for the water - filling power allocation case . in this subsection, we study problem ( p2 ) in the case of and to evaluate the efficiency of the uniform quantization based scheme . we first solve problem ( p2 ) in this case by extending the results in section [ sec : power control and fronthaul rate allocation for the case of one user and one rrh ] .it can be observed that without the last set of constraints involving integer s , problem ( p2 ) is very similar to problem ( p1 ) . as a result ,in the following we propose a two - stage algorithm to solve problem ( p2 ) .first , we ignore the integer constraints in problem ( p2 ) , which is denoted by problem ( p2-noint ) , and apply an alternating optimization based algorithm similar to algorithm [ table1 ] to solve it ( the details of which are omitted here for brevity ) .let denote the converged wireless power and fronthaul rate allocation solution to problem ( p2-noint ) .next , we fix s and find a feasible solution of s based on such that s are integers , , in problem ( p2 ) .this is achieved by rounding each to its nearby integer as follows : , and denotes the maximum integer that is no larger than .note that we can always find a feasible solution of s by simply setting in ( [ eqn : feasible ] ) since in this case we have . in the following ,we show how to find a better feasible solution by optimizing .it can be observed from ( [ eqn : feasible ] ) that with decreasing , the values of s will be non - decreasing , . 
as a result , the objective value of problem ( p2 ) will be non - decreasing , but the fronthaul link constraint in problem ( p2 ) will be more difficult to satisfy . therefore , we propose to apply a simple bisection method to find the optimal value of , denoted by , which is summarized in table [ table4 ] . after is obtained , the feasible solution of s can be efficiently obtained by taking into ( [ eqn : feasible ] ) . notice that by ( [ eqn : feasible ] ) the number of quantization bits per sc , , is now allowed to be zero , instead of being a strictly positive integer as assumed in sections [ sec : two scalar quantization models ] and [ sec : problem formulation ] . hence , for any such sc , the achievable end - to - end rate for the uniform scalar quantization given in ( [ eqn : new end to end rate ] ) no longer holds and should instead be set to zero .
' '' ''
* initialize , ;
* repeat
* * set ;
* * take into ( [ eqn : feasible ] ) . if s , , satisfy the fronthaul link capacity constraint in problem ( p2 ) , set ; otherwise , set ;
* until , where is a small value to control the accuracy of the algorithm ;
* take into ( [ eqn : feasible ] ) to obtain the feasible solution of s , .
' '' '' [ table4 ]
next , we evaluate the end - to - end rate performance of the uniform scalar quantization based scheme in the case of and . note that a cut - set based capacity upper bound of our studied c - ran is , where is the water - filling based optimal power solution given in ( [ eqn : water filling ] ) . [ proposition3 ] in the case of and , let denote the optimal value of problem ( p1 ) with an additional set of constraints of . then we have . please refer to appendix [ appendix6 ] . proposition [ proposition3 ] implies that with the simple solution , where s denote the optimal solution to problem ( [ eqn : fix quantization in p1 ] ) given in appendix [ appendix6 ] , the gaussian test channel based scheme can achieve the capacity to within / hz . next , for the uniform scalar quantization , by setting the quantization noise level given in ( [ eqn : quantization noise power ] ) as , , in problem ( p2-noint ) , we have the following proposition . [ proposition4 ] in the case of and , is a feasible solution to problem ( p2-noint ) . let denote the objective value of problem ( p2-noint ) achieved by the above solution ; we then have . please refer to appendix [ appendix7 ] . it can be inferred from propositions [ proposition3 ] and [ proposition4 ] that . as a result , we have the following corollary . [ corollary1 ] without the constraints that the number of quantization bits per sc is an integer , with the simple solution , the uniform scalar quantization based scheme achieves the capacity to within / hz in the case of and . corollary [ corollary1 ] gives a worst - case performance gap of the proposed uniform quantization based scheme to the cut - set upper bound in ( [ eqn : capacity upper bound ] ) if we ignore the constraints that each quantization level is represented by an integer number of bits . however , it is difficult to analyze the performance loss due to these integer constraints . in the following subsection , we will provide a numerical example to show the impact of the integer constraints on the end - to - end rate performance .
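as a rough illustration of the rounding - plus - bisection procedure in table [ table4 ] , the following python sketch searches for the smallest offset that keeps the rounded integer bit allocation within the fronthaul budget . the exact rounding rule of ( [ eqn : feasible ] ) and the exact form of the fronthaul constraint are not spelled out above , so the sketch assumes the rounding takes the form of a floor with a common offset and that the fronthaul usage is the per - sc sampling rate times bits per sample ; d_hat , bw_sc and c_total are illustrative names .

import numpy as np

def round_bits(d_hat, mu):
    # assumed form of (eqn: feasible): floor the relaxed bit allocation after
    # subtracting a common offset mu, clipped at zero
    return np.maximum(0, np.floor(d_hat - mu)).astype(int)

def fronthaul_cost(d, bw_sc):
    # assumed fronthaul usage: 2 * (per-sc nyquist rate) * (bits per sample), summed over scs
    return np.sum(2.0 * bw_sc * d)

def bisect_offset(d_hat, bw_sc, c_total, eps=1e-3):
    # smallest mu in [0, 1] whose rounded allocation still fits the fronthaul budget;
    # mu = 1 is always feasible, and a smaller mu never gives fewer bits
    def feasible(mu):
        return fronthaul_cost(round_bits(d_hat, mu), bw_sc) <= c_total
    lo, hi = 0.0, 1.0
    if feasible(lo):
        return round_bits(d_hat, lo)
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid        # feasible: try a smaller offset (more bits)
        else:
            lo = mid        # infeasible: increase the offset
    return round_bits(d_hat, hi)

# example: relaxed solution from (p2-noint), 4 scs of 10 mhz each, 100 mbps fronthaul
d_bits = bisect_offset(np.array([2.7, 2.1, 1.4, 0.3]), np.full(4, 1.0e7), 1.0e8)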
in this subsection , we provide a numerical example to verify our results for the case of and . the setup of this example is summarized as follows . the channel bandwidth is assumed to be , which is equally divided into scs . the user's transmit power is . it is assumed that the distance between the user and the rrh is m . the path loss model is db . moreover , it is assumed that the power spectral density of the awgn at the rrh is / hz , and the noise figure is . first , we evaluate the performance of the proposed uniform scalar quantization based scheme against that of the gaussian test channel based scheme as well as the capacity upper bound given in ( [ eqn : capacity upper bound ] ) . fig . [ fig9 ] shows the end - to - end rate achieved by various schemes versus the fronthaul link capacity . note that with the algorithm proposed for problem ( p2-noint ) in section [ sec : uniform scalar quantization ] , we use as the initial point such that the worst - case performance gap shown in corollary [ corollary1 ] can be guaranteed . it is observed from fig . [ fig9 ] that for various values of , the uniform scalar quantization based scheme without the integer constraints in problem ( p2 ) indeed achieves a rate within / hz of the capacity upper bound . moreover , it is observed that with algorithm [ table4 ] , the performance loss due to the integer constraints is negligible . however , if we simply set in ( [ eqn : feasible ] ) to find feasible s , there will be a considerable rate loss . as a result , our proposed algorithm [ table4 ] is practically useful for setting such that the uniform scalar quantization based scheme can perform very close to the capacity upper bound . last , it is observed that the performance gap of all the schemes to the upper bound vanishes as the fronthaul link capacity increases . this is because if is sufficiently large at the rrh , each symbol can be quantized by a large number of bits such that the specific quantization method does not affect the quantization noise significantly .
to further illustrate the gain from joint optimization of wireless power and fronthaul rate allocation , in the following we introduce some benchmark schemes where either wireless power or fronthaul rate allocation is optimized , but not both .
* * benchmark scheme 1 : equal power allocation . * in this scheme , the user allocates its transmit power equally to each sc , i.e. , , . then , with the given equal power allocation , we optimize the fronthaul rate allocation at the rrh to maximize the end - to - end rate .
* * benchmark scheme 2 : water - filling power allocation . * in this scheme , the user ignores the fronthaul link constraints and allocates its transmit power based on the water - filling solution as shown in ( [ eqn : water filling ] ) . then , with the given water - filling based power allocation , we optimize the fronthaul rate allocation at the rrh to maximize the end - to - end rate .
* * benchmark scheme 3 : equal fronthaul rate allocation . * in this scheme , the rrh equally allocates its fronthaul link capacity among scs , . then , with the given equal fronthaul rate allocation , we optimize the transmit power of the user to maximize the end - to - end rate .
* * benchmark scheme 4 : equal power and fronthaul rate allocation .
* in this scheme , the user allocates its transmit power equally to each sc , and the rrh equally allocates its fronthaul link bandwidth among scs .
fig . [ fig4 ] shows the performance comparison among the various proposed solutions for the uniform scalar quantization based scheme . it is observed that compared with benchmark schemes 1 - 4 , where only either wireless power or fronthaul rate allocation is optimized , our joint optimization solution proposed in section [ sec : uniform scalar quantization ] achieves a much higher end - to - end rate , especially when the fronthaul link capacity is small , e.g. , gbps . furthermore , it is observed from benchmark schemes 1 and 3 that when is small , fronthaul rate optimization plays the dominant role in improving the end - to - end rate performance , while when is large , most of the optimization gain comes from the wireless power allocation . furthermore , when is sufficiently large , the performance of benchmark schemes 2 and 3 , for which wireless power allocation is optimized , even converges to the joint optimization solution proposed in section [ sec : uniform scalar quantization ] .
in this section , we consider the joint wireless power allocation and fronthaul rate allocation in the general c - ran with multiple users and multiple rrhs , i.e. , and . in this subsection , we solve problem ( p1 ) . it is worth noting that , different from section [ sec : power control and fronthaul rate allocation for the case of one user and one rrh ] , in the case of multiple rrhs the throughput given in ( [ eqn : test channel rate ] ) is not concave over s with given s due to the summation over in ( [ eqn : optimal sinr ] ) . as a result , the alternating optimization based solution proposed in section [ sec : power control and fronthaul rate allocation for the case of one user and one rrh ] cannot be directly extended to the general case of and . to deal with the above difficulty , we change the design variables in problem ( p1 ) . define . by changing the design variables of problem ( p1 ) from to , problem ( p1 ) is transformed into the following problem . problem ( [ eqn : p4 ] ) is still a non - convex problem . in the following , we propose to apply the techniques of alternating optimization as well as convex approximation to solve it . first , by fixing s , we optimize the transmit power allocation s by solving the following problem . let s denote the optimal solution to problem ( [ eqn : p5 ] ) . then , by fixing s , we optimize the fronthaul rate allocation by solving the following problem . let s denote the optimal solution to problem ( [ eqn : p6 ] ) . then , the above update of s and s is iterated until convergence . in the following , we show how to solve problems ( [ eqn : p5 ] ) and ( [ eqn : p6 ] ) , respectively . first , we consider problem ( [ eqn : p5 ] ) . we have the following lemma . [ lemma1 ] the objective function of problem ( [ eqn : p5 ] ) is a concave function over . please refer to appendix [ appendix8 ] . according to lemma [ lemma1 ] , problem ( [ eqn : p5 ] ) is a convex optimization problem . as a result , its optimal solution can be efficiently obtained via the interior - point method . next , we consider problem ( [ eqn : p6 ] ) . similar to lemma [ lemma1 ] , it can be shown that the objective function of problem ( [ eqn : p6 ] ) is a concave function over s . however , the fronthaul link capacity constraints in problem ( [ eqn : p6 ] ) are not convex .
in the following , we apply the convex approximation technique to convexify the fronthaul link capacity constraints . specifically , since according to ( [ eqn : new fronthaul rate sc ] ) is concave over s , its first - order approximation serves as an upper bound to it , i.e. , . note that the above inequality holds given any s . as a result , we solve the following problem as a convex approximation of problem ( [ eqn : p6 ] ) . problem ( [ eqn : p7 ] ) is a convex problem , and thus its optimal solution , denoted by s , can be efficiently obtained via the interior - point method . then we have the following lemma . [ lemma2 ] suppose that s is a feasible solution to problem ( [ eqn : p6 ] ) , i.e. , , . then , s is a feasible solution to problem ( [ eqn : p6 ] ) and achieves an objective value no smaller than that achieved by the solution s . please refer to appendix [ appendix9 ] . since the optimal solution to problem ( [ eqn : p6 ] ) , i.e. , s , is difficult to obtain , in the following we use as the solution to problem ( [ eqn : p6 ] ) according to lemma [ lemma2 ] , i.e. , , . after problems ( [ eqn : p5 ] ) and ( [ eqn : p6 ] ) are solved , we are ready to propose the overall iterative algorithm to solve problem ( [ eqn : p4 ] ) , which is summarized in table [ table3 ] . note that in step 2.c . , we set s in problem ( [ eqn : p7 ] ) . according to lemma [ lemma2 ] , s will achieve a sum - rate that is no smaller than that achieved by s . to summarize , a monotonic convergence can be guaranteed for algorithm iii since the objective value of problem ( [ eqn : p4 ] ) is increased after each iteration and it is upper - bounded by a finite value .
' '' '' algorithm iii :
* initialize : set , , , and ;
* repeat
* * ;
* * update by solving problem ( [ eqn : p5 ] ) with , , via the interior - point method ;
* * update by solving problem ( [ eqn : p7 ] ) with and , , via the interior - point method ;
* until , where denotes the objective value of problem ( [ eqn : p4 ] ) achieved by the solution , and is a small value to control the accuracy of the algorithm .
' '' '' [ table3 ]
in this subsection , we propose an efficient algorithm to solve problem ( p2 ) by jointly optimizing the wireless power allocation as well as the fronthaul rate allocation . to be consistent with the solution to problem ( p1 ) proposed in section [ sec : power control and fronthaul rate allocation for the case of multiple users and multiple rrhs ] , we define . by changing the design variables from into , problem ( p2 ) is transformed into the following problem . it can be observed that if we ignore the last set of constraints involving integer s , then problem ( [ eqn : p8 ] ) is very similar to problem ( [ eqn : p4 ] ) . as a result , we propose a two - stage algorithm to solve problem ( [ eqn : p8 ] ) . first , we ignore the last constraints in problem ( [ eqn : p8 ] ) and apply an alternating optimization based algorithm similar to algorithm iii to solve it ( the details of which are omitted here for brevity ) . let denote the obtained solution . then we fix s and find a feasible solution s based on s such that s are integers . for any given , this is done by rounding s , , to their nearby integers as follows : , . similar to algorithm [ table4 ] for the special case of and , the optimal value of can be efficiently obtained via a simple bisection method , and thus a feasible solution of s , , is obtained according to ( [ eqn : feasible1 ] ) . last , by searching from to , the overall feasible solution is obtained .
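to make the role of the first - order approximation concrete , the following python snippet illustrates the lemma [ lemma2 ] style argument on a toy instance . since the exact fronthaul - rate expression in ( [ eqn : new fronthaul rate sc ] ) is not reproduced above , an illustrative concave rate of the form log2 ( 1 + a * t ) is used as a placeholder ; the point is only that the affine upper bound obtained by linearizing a concave function turns the non - convex capacity constraint into a conservative convex one , so any point feasible for the approximated constraint is also feasible for the original one .

import numpy as np

def fronthaul_rate(t, a):
    # illustrative concave per-sc fronthaul rate (placeholder for the paper's expression)
    return np.log2(1.0 + a * t)

def fronthaul_rate_grad(t, a):
    # derivative of the illustrative rate with respect to t
    return a / ((1.0 + a * t) * np.log(2.0))

def linearized_rate(t, t0, a):
    # first-order (affine) upper bound of the concave rate around the point t0
    return fronthaul_rate(t0, a) + fronthaul_rate_grad(t0, a) * (t - t0)

# toy check of the conservativeness argument used in lemma [lemma2]
a = np.array([0.8, 1.5, 2.0])      # illustrative per-sc coefficients
c_total = 4.0                      # illustrative fronthaul capacity budget
t0 = np.ones(3)                    # current iterate (linearization point)
t = np.array([1.2, 0.7, 0.9])      # candidate next iterate
lin_feasible = np.sum(linearized_rate(t, t0, a)) <= c_total
orig_feasible = np.sum(fronthaul_rate(t, a)) <= c_total
assert (not lin_feasible) or orig_feasible   # linearized feasibility implies original feasibility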
in this subsection , we provide a numerical example to evaluate the sum - rate performance of the proposed uniform scalar quantization based scheme in a single c - ran cluster with rrhs and users randomly distributed in a circular area of radius m . it is assumed that the bandwidth of the wireless link is equally divided into scs , and each user is pre - allocated scs . it is further assumed that the capacities of all the fronthaul links are identical , i.e. , , . the other setup parameters are the same as those used in section [ sec : numerical example ] . similar to the single - user single - rrh case in section [ sec : numerical example ] , we provide various benchmark schemes . note that benchmark schemes 1 - 4 introduced in section [ sec : numerical example ] can be simply extended to the general case of and . furthermore , to compare the sum - rate performance between our studied ofdma - based c - ran and conventional ofdma - based cellular networks , we also consider the following benchmark scheme .
* * benchmark scheme 5 : conventional ofdma . * in this scheme , we assume that each rrh operates like a conventional bs in cellular networks , which directly decodes the messages of its served users rather than forwarding its received signals to the bbu for joint decoding . for simplicity , we assume that each user is served by its nearest rrh . then , the optimal power solution for each user among its assigned scs is the standard `` water - filling '' solution given in ( [ eqn : water filling ] ) .
fig . [ fig14 ] shows the end - to - end sum - rate performance versus the common fronthaul link capacity , , achieved by uniform quantization , the gaussian test channel , as well as benchmark schemes 1 - 5 ( note that in benchmark scheme 5 , since each rrh decodes the messages locally , the sum - rate is a constant regardless of the fronthaul capacities ) . it is observed that with our proposed algorithm in section [ sec : joint optimization of wireless power allocation and fronthaul rate allocation ] , the sum - rate achieved by the uniform scalar quantization based scheme is very close to that achieved by the gaussian test channel based scheme for various fronthaul capacities . furthermore , this performance gap vanishes as the fronthaul link capacities increase at all rrhs . it is also observed that compared with benchmark schemes 1 - 4 , where only either wireless power or fronthaul rate allocation is optimized , our joint optimization solution proposed in section [ sec : joint optimization of wireless power allocation and fronthaul rate allocation ] achieves a much higher sum - rate , especially when the fronthaul link capacities are not sufficiently high . by comparing with fig . [ fig4 ] , it is observed that the joint optimization gain is more significant than in the case of a single user and a single rrh . last , it is observed that with joint optimization of wireless and fronthaul resource allocation , the sum - rate achieved by the proposed ofdma - based c - ran is much higher than that achieved by benchmark scheme 5 , i.e. , conventional ofdma , under the moderate capacities of current commercial fronthaul links , e.g. , several gbps .
in this paper , we have proposed joint wireless power control and fronthaul rate allocation optimization to maximize the throughput performance of an ofdma - based broadband c - ran system .
in particular , we have considered using practical uniform scalar quantization instead of the information - theoretical quantization method in the system design . efficient algorithms have been proposed to solve the joint optimization problems . our results showed that the joint design achieves a significant performance gain compared to optimizing either wireless power control or fronthaul rate allocation alone . besides , we showed that the throughput performance of the proposed simple uniform scalar quantization is very close to the ( cut - set based ) performance upper bound . this has verified that high throughput performance can be practically achieved with c - ran using simple fronthaul signal quantization methods . there are also many interesting topics to be studied in the area of fronthaul - constrained ofdma - based c - ran systems , for instance : the impact of imperfect fronthaul links with packet loss of quantized data ; dynamic sc allocation among mobile users ; multiple users coexisting on one sc to further improve the spectral efficiency ; distributed quantization among rrhs to exploit the signal correlations ; and joint wireless resource and fronthaul rate allocations in the downlink , etc .
in this appendix , we provide the details on the implementation of uniform scalar quantization introduced in section [ sec : uniform quantization model ] . first , each rrh normalizes the i - branch and q - branch symbols at each sc into the interval . we assume that rrh uses bits to quantize the symbol received on sc , resulting in quantization levels , for which the quantization step size is given by . for each normalized symbol or , its quantized value is given by , where denotes the minimum integer that is no smaller than . then , s and s are encoded into digital codewords s and s and transmitted to the bbu . note that the i , q symbols , i.e. , s and s , are obtained by sampling the i , q waveforms , the bandwidth of which is , . as a result , at each rrh , the nyquist sampling rate for the i , q waveforms at each sc is samples per second . furthermore , since at rrh each sample at sc is represented by bits , the corresponding transmission rate in the fronthaul link is expressed as . the overall transmission rate from rrh to the bbu in the fronthaul link is then given as , which should not exceed the fronthaul link capacity , . proposition [ fronthaul ] is thus proved .
to derive the end - to - end sum - rate , we need to calculate the power of the quantization error given in ( [ eqn : quantized signal ] ) , i.e. , , . note that in ( [ eqn : quantized signal ] ) we have , . according to widrow's theorem , if the number of quantization levels ( i.e. , ) is large , and the signal varies by at least some quantization levels from sample to sample , the quantization noise can be assumed to be uniformly distributed . as a result , we assume that the quantization errors for both the i , q signals , which are denoted by and with , are uniformly distributed in , . then we have , which is obtained by substituting by according to ( [ eqn : fronthaul link uniform quantization sc ] ) . then according to ( [ eqn : optimal sinr ] ) , a lower bound for the achievable rate of user at sc , by viewing given in ( [ eqn : beamforming ] ) as the worst - case gaussian noise ( it is worth noting that the equivalent quantization error given in ( [ eqn : beamforming ] ) , i.e. , , is the summation of independent uniformly distributed random variables s .
according to the central limit theorem , tends to be gaussian distributed when is large ) , can be expressed as . the end - to - end throughput of all users is thus expressed as . proposition [ rate ] is thus proved .
the lagrangian of problem ( [ eqn : p2 ] ) is expressed as , where is the dual variable associated with the transmit power constraint in problem ( [ eqn : p2 ] ) . then , the lagrangian dual function of problem ( [ eqn : p2 ] ) is expressed as . the maximization problem ( [ eqn : dual function 1 ] ) can be decoupled into parallel subproblems all having the same structure , each for one sc . for one particular sc , the associated subproblem is expressed as . it can be shown that is concave over , . the derivative of over is expressed as . setting , we have , where s and s are given in ( [ eqn : alpha ] ) and ( [ eqn : eta ] ) , respectively . if , then there exists a unique positive solution to the quadratic equation ( [ eqn : equation ] ) , denoted by . in this case , is an increasing function over in the interval , and a decreasing function in the interval . as a result , is maximized when . otherwise , if , there is no positive solution to the quadratic equation ( [ eqn : equation ] ) , and thus is a decreasing function over in the interval . in this case , is maximized when . after problem ( [ eqn : dual function 1 ] ) is solved given any , in the following we explain how to find the optimal dual solution for . it can be shown that the objective function in problem ( [ eqn : p2 ] ) is an increasing function over , and thus the transmit power constraint must be tight in problem ( [ eqn : p2 ] ) . as a result , the optimal can be efficiently obtained by a simple bisection method such that the transmit power constraint is tight in problem ( [ eqn : p2 ] ) . proposition [ proposition1 ] is thus proved .
let denote the dual variable associated with the fronthaul link capacity constraint in problem ( [ eqn : p3 ] ) . similar to appendix [ appendix1 ] , it can be shown that problem ( [ eqn : p3 ] ) can be decoupled into subproblems , with each one formulated as . the derivative of over is expressed as . if , then , i.e. , is a decreasing function over , . in this case , we have , , which cannot be the optimal solution to problem ( [ eqn : p3 ] ) . as a result , the optimal dual solution must satisfy . in this case , it can be shown that is an increasing function over when , and a decreasing function otherwise . as a result , is maximized at . after problem ( [ eqn : subproblem 2 ] ) is solved given any , the optimal that is the dual solution to problem ( [ eqn : p3 ] ) can be efficiently obtained by a simple bisection method over such that the fronthaul link capacity constraint is tight in problem ( [ eqn : p3 ] ) . proposition [ proposition2 ] is thus proved .
with the constraints given in ( [ eqn : constraint1 ] ) , given in ( [ eqn : test channel rate ] ) reduces to . it can be shown from ( [ eqn : constraint1 ] ) that , with the additional constraints given in ( [ eqn : constraint1 ] ) , problem ( p1 ) can be simplified as the following power control problem .
let and denote the optimal power solutions to problem ( [ eqn : fix quantization in p1 ] ) and to the relaxed version of problem ( [ eqn : fix quantization in p1 ] ) without the first fronthaul link constraint , respectively . if , we have , . otherwise , it can be shown that any feasible solution to the following problem is optimal to problem ( [ eqn : fix quantization in p1 ] ) : . to summarize , the cut - set bound based optimal value of problem ( [ eqn : fix quantization in p1 ] ) is expressed as . in the following , we compare this optimal value with the capacity upper bound given in ( [ eqn : capacity upper bound ] ) . first , we have , which holds because is the optimal power solution to problem ( [ eqn : fix quantization in p1 ] ) without the fronthaul link constraint . it then follows that . proposition [ proposition3 ] is thus proved .
first , it follows that . as a result , is a feasible solution to problem ( p2-noint ) . furthermore , with , given in ( [ eqn : uniform quantization sum - rate ] ) reduces to . it then follows that . proposition [ proposition4 ] is thus proved .
define . it can be shown that is concave over , . as a result , is concave over , . according to the composition rule , is concave over , . it then follows that the objective function of problem ( [ eqn : p5 ] ) , i.e. , , is concave over . lemma [ lemma1 ] is thus proved .
first , due to the inequality given in ( [ eqn : first order approximation ] ) , any feasible solution to problem ( [ eqn : p7 ] ) must be a feasible solution to problem ( [ eqn : p6 ] ) . therefore , s must be feasible to problem ( [ eqn : p6 ] ) . next , it can be observed that if s is feasible to problem ( [ eqn : p6 ] ) , it must be feasible to problem ( [ eqn : p7 ] ) . since s is the optimal solution to problem ( [ eqn : p7 ] ) , the sum - rate achieved by it must be no smaller than that achieved by s . lemma [ lemma2 ] is thus proved .
d. gesbert , s. hanly , h. huang , s. shamai , o. simeone , and w. yu , `` multi - cell mimo cooperative networks : a new look at interference , '' _ ieee j. sel . areas commun . _ , no . 9 , pp . 1380 - 1408 , dec . 2010 .
s. h. park , o. simeone , o. sahin , and s. shamai , `` robust and efficient distributed compression for cloud radio access networks , '' _ ieee trans . vehicular technology _ , pp . 692 - 703 , feb . 2013 .
l. zhou and w. yu , `` uplink multicell processing with limited backhaul via per - base - station successive interference cancellation , '' _ ieee j. sel . areas commun . _ , pp . 1981 - 1993 , oct . 2013 .
s. h. park , o. simeone , o. sahin , and s. shamai , `` joint precoding and multivariate backhaul compression for the downlink of cloud radio access networks , '' _ ieee trans . signal process . _ , pp . 5646 - 5658 , nov .
the performance of cloud radio access network ( c - ran ) is constrained by the limited fronthaul link capacity under future heavy data traffic . to tackle this problem , extensive efforts have been devoted to designing efficient signal quantization / compression techniques in the fronthaul to maximize the network throughput . however , most of the previous results are based on information - theoretical quantization methods , which are hard to implement in practice due to their high complexity . in this paper , we propose using practical uniform scalar quantization in the uplink communication of an orthogonal frequency division multiple access ( ofdma ) based c - ran system , where the mobile users are assigned orthogonal sub - carriers for transmission . in particular , we study the joint wireless power control and fronthaul quantization design over the sub - carriers to maximize the system throughput . efficient algorithms are proposed to solve the joint optimization problem when either the information - theoretical or the practical fronthaul quantization method is applied . we show that the fronthaul capacity constraints have a significant impact on the optimal wireless power control policy . as a result , the joint optimization shows significant performance gain compared with optimizing only wireless power control or fronthaul quantization . besides , we also show that the proposed simple uniform quantization scheme performs very close to the throughput performance upper bound , and in fact overlaps with the upper bound when the fronthaul capacity is sufficiently large . overall , our results reveal the practically achievable throughput performance of c - ran for its efficient deployment in the next - generation wireless communication systems .
cloud radio access network ( c - ran ) , fronthaul constraint , quantize - and - forward , orthogonal frequency division multiple access ( ofdma ) , power control , throughput maximization .